Advanced Topics

Connecting to MongoDB over SSL

JSON Studio can connect to a MongoDB server configured to accept SSL connections. If your server is configured to use SSL, check “Use TLS/SSL” on the login screen. The Studio will then connect to the server using SSL and the session will be encrypted.

If the MongoDB server uses a valid certificate (i.e. one signed by a CA known as a trusted signer), the Studio will accept the certificate sent by the server and establish the connection. If the server’s certificate cannot be validated by the JVM, the Studio will reject it and not establish the connection. If you want the connection to be made in this case too (in effect using only the encryption feature, without the authentication/trust feature that ensures you are indeed connected to a valid database server), check the “Accept any certificate” checkbox on the login screen and the Studio will establish the connection without verifying the certificate.

Which certificates are trusted is determined by the JVM you are using. To include self-signed certificates as trusted, follow standard JDK/JRE instructions (e.g. import the certificate into the JVM’s trust store using keytool).

Connecting to MongoDB using Kerberos Authentication

JSON Studio can connect to a MongoDB server that uses Kerberos authentication.

Before JSON Studio version 1.4 this was only available when JSON Studio was used as a single-user application. The user first authenticates with the KDC and a ticket is acquired for that session. JSON Studio runs within the same host/session and shares this ticket information. In versions prior to 1.4 it is not possible to have JSON Studio run as a multi-user system and accept remote connections from different users, each using different Kerberos credentials. If you are using JSON Studio prior to 1.4 in a Kerberized environment, ensure that each user runs their own instance of JSON Studio as a client-side application.

Starting from version 1.4, it is possible to have a shared JSON Studio server that allows multiple users to authenticate with different Kerberos credentials.

In order to have JSON Studio use Kerberos authentication you need to configure the realm and KDC. If you are performing a fresh install, answer the installer’s questions for these values. If you have already installed the Studio, shut down Tomcat, edit the file bin/setenv.sh (Unix) or bin\setenv.bat (Windows) in your installation directory to include the following line (adjusting the realm and KDC values to your environment), and restart Tomcat.

On Unix:

export CATALINA_OPTS="$CATALINA_OPTS -Djava.security.krb5.realm=JSONAR.COM -Djava.security.krb5.kdc=ubuntu1204.jsonar.com -Djavax.security.auth.useSubjectCredsOnly=false"

On Windows:

set CATALINA_OPTS=-Djava.security.krb5.realm=JSONAR.COM -Djava.security.krb5.kdc=ubuntu1204.jsonar.com -Djavax.security.auth.useSubjectCredsOnly=false

When performing the login, check the “Kerberos authentication” checkbox to have the Studio use Kerberos credentials. The username you use will take the form of:

qakrb@JSONAR.COM

Do not specify a password. Such a user needs to be defined within MongoDB using $external, for example:

> db.addUser({"user":"qakrb@JSONAR.COM", "roles": ["read", "readWrite", "dbAdmin", "userAdmin"], "userSource":"$external"})
{
    "user" : "qakrb@JSONAR.COM",
    "roles" : [
        "read",
        "readWrite",
        "dbAdmin",
        "userAdmin"
    ],
    "userSource" : "$external",
    "_id" : ObjectId("52332c8f480b1bedf1a01b08")
}

Prior to version 1.4, if you have already authenticated with the KDC in a session before the login, e.g.:

$ kinit qakrb
Password for qakrb@JSONAR.COM:


$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: qakrb@JSONAR.COM

Valid starting    Expires           Service principal
13/11/2013 11:55  13/11/2013 21:55  krbtgt/JSONAR.COM@JSONAR.COM
    renew until 14/11/2013 11:55

then the Studio will connect. Otherwise, if the credential is not in klist yet, then the shell in which you ran startup.{sh|bat} will issue the Kerberos challenge, and after you authenticate, the Studio will continue to connect to the database.

As of version 1.4 the Studio does not use the credentials stored in the operating system session. Instead, pass the credential and password on the login screen or to the Gateway as you would pass a username/password using MongoDB credentials and make sure to check the Kerberos flag.

By default the installation assumes you are using Oracle’s JRE. If you are using a different vendor’s JVM and security libraries you will need to change the JAASLogin.config file in the conf directory. The default contents of this file are:

JAASLogin {
   com.sun.security.auth.module.Krb5LoginModule required;
};

As an example, if you are using IBM’s JRE change the contents to:

JAASLogin {
     com.ibm.security.auth.module.Krb5LoginModule required;
};

When connecting to MongoDB you do not need to enter any value in the Kerberos service field (leave it empty). When you connect to SonarW over Kerberos you need to enter sonarw in this field.

Using LDAP Authentication

JSON Studio supports the LDAP authentication introduced in MongoDB 2.6. This is available only for Linux-based MongoDB servers, but from all JSON Studio installations (Linux, Windows and MacOS).

To authenticate using LDAP check the LDAP checkbox on the login screen.

Note that because LDAP authentication in MongoDB sends the password in plaintext over the wire, selecting LDAP authentication automatically selects SSL as well. You may uncheck SSL and proceed to login if you do not have it, but for security reasons this is not recommended.

Credential mapping

In some cases you want users to be able to retrieve data but do not want to define all users in the database. This is true for both the Studio and the Gateway but is especially true for the Gateway.

To implement an environment in which many users access data you would define a few functional accounts at the database level, each having the credentials required; these can be viewed as roles. You would not give these IDs to individual users but instead would define a mapping, specifying per end-user which functional ID they should use (and thus which privileges they would have). The user would login using their own credentials to JSON Studio or the Gateway and the mapping would be used to retrieve the functional ID with which the connection is made to the database. All this is done transparently and the end-user would never have access to these IDs.

The implementation is based on a pluggable interface defined as:

package com.jsonar.util;

public interface CredentialMapper {

   /*
    * Return username and password to be used for connecting to the database.
    * Usually used for mapping individual credentials to functional credentials used in the database with specific roles.
    * Input includes all parameters passed in by the user - not all of which need to have values (may have null) and not
    * all of which need to be used by the mapping.
    * Implementation should use organization's entitlement system and can also create an audit record in the organization's
    * preferred auditing system.
    *
    * Note - must always return a pair of strings; if no mapping is found should return the originals
    *
    */

   public String[] databaseCredentialFor(String userName, String pwd, String dbName, String host, int port, boolean isSSL, boolean isKRB);

}

The system ships with a trivial mapper that takes no action:

package com.jsonar.util;

public class DefaultCredentialMapper implements CredentialMapper {

   public String[] databaseCredentialFor(String userName, String pwd, String dbName, String host, int port, boolean isSSL, boolean isKRB) {
      String[] toRet = { userName, pwd};
      return toRet;
   }

}
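To make the pattern concrete, here is a hypothetical mapper sketch: the hard-coded table, usernames and functional IDs below are illustrative stand-ins for a call into a real entitlement system, and the class is written as a standalone static sketch so it can run on its own rather than implementing the com.jsonar.util.CredentialMapper interface directly.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical mapper: looks up a functional database account for each
// end-user and falls back to the original credentials when no mapping exists.
public class RoleBasedCredentialMapper {

   // In a real deployment this would be a call into the organization's
   // entitlement system, not a hard-coded map.
   private static final Map<String, String[]> ENTITLEMENTS = new HashMap<>();
   static {
      ENTITLEMENTS.put("alice", new String[] { "func_readonly", "ro_secret" });
      ENTITLEMENTS.put("bob",   new String[] { "func_readwrite", "rw_secret" });
   }

   public static String[] databaseCredentialFor(String userName, String pwd,
         String dbName, String host, int port, boolean isSSL, boolean isKRB) {
      String[] mapped = ENTITLEMENTS.get(userName);
      // Contract: always return a pair; if no mapping is found, return the originals.
      if (mapped == null) {
         return new String[] { userName, pwd };
      }
      // This is also a natural place to write an audit record.
      return mapped;
   }

   public static void main(String[] args) {
      String[] creds = databaseCredentialFor("alice", "pw", "db", "h", 27017, false, false);
      System.out.println(creds[0]);
   }
}
```

A real implementation would add `implements CredentialMapper` and keep the rest of the structure, so the end-user never sees the functional IDs.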

To implement this functionality, implement a class that defines this interface using your preferred entitlements system. Add this Java class to the JSON Studio / Gateway classpath. Then register it as the mapping class by editing web.xml in <install dir>/jsonar/sonarFinder/WEB-INF, replacing the default class name with yours in the following:

<context-param>
   <param-name>jsonStudio.credentialsMapper</param-name>
   <param-value>com.jsonar.util.DefaultCredentialMapper</param-value>
</context-param>

It is recommended that when you implement the databaseCredentialFor method you not only retrieve the functional credentials but also write an audit record in your auditing system. Regardless, JSON Studio will also write out a login audit record which includes the username and the functional ID used to login to the database.

Central Datasource Management (aka support for Mongo-as-a-Service - MaaS)

JSON Studio supports configurations where datasources are maintained inside a MongoDB database/collection rather than locally in the browser’s local store. This is especially useful when clusters/instances are provisioned using a central automation system (MaaS) and datasources are configured centrally as part of this provisioning process (in addition to datasource creation by the user). The architecture for central datasource management is shown below:

[Figure: central datasource management architecture - _images/architectureMaas.jpg]

When you configure JSON Studio for a MaaS environment you can connect using two methods:

1. You can continue to login normally by starting from login.jsf, or
2. You can navigate to maaslogin.jsf.

In the latter case you will first supply your central username and password as shown below.

[Figure: MaaS login page - _images/maas1.jpg]

These credentials are the ones used to connect to the datasource database shown in the previous image. This is a MongoDB database where datasources are maintained for users and can be written to by the central provisioning system, through APIs or through other management processes. Consider allowing connections to this database only from the management nodes and from the JSON Studio nodes through IP configurations or other mechanisms.

Datasources are maintained in a collection named lmrm__datasources and each document has the form of:

{
   "_id" : ObjectId("5471ffade4ea3fabc3401eea"),
   "name" : <datasource name>,
   "studio_uri" : <mongodb URI for the studio database - optional>,
   "uri" : <mongodb URI for connecting to the database - should not store passwords, use $$pwd instead>,
   "user" : <user name>,
   "type" : "F" | "U"
}

Note that the username in this document is the central username (e.g. the user’s LDAP username in the organization), whereas the username used to connect to the MongoDB database is specified as part of the URI field and does not need to be the same user. This is especially useful when you want to automatically create functional logins for users. Note that for functional IDs you can include passwords in the URI (if, for example, users do not have passwords for functional accounts). However, no encryption is used and the passwords are maintained in clear text in this collection, making it even more important to limit access to this database to JSON Studio and management systems only.

The type field has a value of either “F” or “U”. “U” is the default and implies that the user owns that data source and can make changes to it when using the “save as” field. Use “F” for functional accounts that the user should have access to - but the user cannot change these datasources and while their logins use the stored password, they do not see that password on screen.

When a user enters their username and password in the MaaS Login page, a connection to this database is made (the user is authenticated using their username and password) and all data sources are read for that user. These datasources are loaded into the pull-down on the main login page and the user can select that datasource, fill in the password and login to the cluster/instance as shown below.

[Figure: main login page with centrally managed datasources - _images/maas2.jpg]

A few notes:

  • Since datasources are keyed by the username, this scheme works when each user in the organization has a unique username (e.g. using LDAP).
  • Each user that uses the MaaS infrastructure must be defined as a user with either read or readWrite privileges in the datasource database since they must be authenticated to extract the datasources. If only read privileges are assigned then the user may not create new datasources or modify datasources but may still use datasources. Note that normal, direct login (without the convenience of central datasource management) is always available no matter what.
  • When the “save datasource” checkbox is checked in MaaS login, changes to the datasource or new datasources are saved to the central datasource store.
  • This scheme is supported for MongoDB-CR, Kerberos and LDAP authentication only. However, a consistent scheme should be used - for example, if you use LDAP authentication for your database you also need to use LDAP authentication for your Studio DB.
  • Any type of MongoDB setup can be used for datasource persistence - including (and preferably) a replica set to ensure availability.
  • Because the scheme relies on MongoDB persistence you have full flexibility to decide where to install instances of JSON Studio. You can have a central server, servers in each data center, servers per departments etc. The key is that all JSON Studio servers are configured to point to this datasource store.

To set up JSON Studio for centralized datasource management, edit web.xml in the jsonar/sonarFinder/WEB-INF directory and change the following entries:

<!--  Put $$user and $$pwd for placeholders for username and password -->
<context-param>
 <param-name>jsonStudio.maasURI</param-name>
 <param-value>mongodb://$$user:$$pwd@hostname/lmrm</param-value>
</context-param>
<context-param>
 <param-name>jsonStudio.maasDB</param-name>
 <param-value>lmrm</param-value>
</context-param>

The maasURI parameter is a MongoDB URI that points at the instance (or replica set) where the datasources are stored, and the maasDB parameter is the database where the lmrm__datasources collection resides. The URI must contain $$user and $$pwd since these are replaced with the username and password supplied by the user to login to the datasource store. Note that the database in the maasURI is the database against which authentication occurs, and maasDB is the database where the find/update occurs.
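The placeholder substitution itself amounts to a simple string replacement; a minimal sketch (the template is the example URI from web.xml above, the credentials are hypothetical, and a production version would also URL-encode special characters in the values):

```java
public class MaasUriBuilder {

   // Replace the $$user and $$pwd placeholders in the configured maasURI
   // with the credentials the user supplies on the MaaS login page.
   public static String buildUri(String template, String user, String pwd) {
      return template.replace("$$user", user).replace("$$pwd", pwd);
   }

   public static void main(String[] args) {
      String template = "mongodb://$$user:$$pwd@hostname/lmrm";
      // Hypothetical credentials, for illustration only.
      System.out.println(buildUri(template, "alice", "secret"));
   }
}
```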

As another example, to set up this central store using LDAP (and an instance running on an EC2 machine):

<context-param>
   <param-name>jsonStudio.maasURI</param-name>
   <param-value>mongodb://$$user:$$pwd@ec2-54-164-11-22.compute-1.amazonaws.com:27017/lmrm?authMechanism=PLAIN&#38;authSource=$external</param-value>
  </context-param>
  <context-param>
   <param-name>jsonStudio.maasDB</param-name>
   <param-value>lmrm</param-value>
  </context-param>

Mongo-as-a-Service Redirects

Another integration method for organizations that deploy centralized MongoDB portals is URL integration. In this pattern you link to a JSON Studio login page from your portal and pass in parameters used to connect to the database (which may exist within your portal).

To pass connection information open a browser page from your portal using URL query parameters shown by the following example:

https://<your JSON Studio host>:8443/login.jsf?maas=1&u=<username>&p=<mongodb port>&host=<mongodb host>&sdb=<Studio DB>&db=<MongoDB database>

Custom Auditing

The CredentialMapper shown above also includes functions for custom logging/auditing:

// For Enterprise audit trail
public void auditLogin(DBObject dbo);
public void auditLogout(DBObject dbo);

The default credential mapper does nothing in these functions. Implement these methods in your mapper if you wish the audit trail to be logged to a specialized audit trail.

Connecting to a replica set

In order to connect to a replica set, enter all the hostnames/ports into the hostname field in the login screen, e.g.:

54.196.138.247:27017 54.196.139.172:27017 54.196.124.15:27017

or:

mongo0.yourdomain.com:27017 mongo1.yourdomain.com:27017 mongo2.yourdomain.com:27017

When you connect to the replica set, the driver determines which server is the primary and connects to it. You do not need to provide all hostnames, but if the current primary is not one of them then the connection fails.
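The space-separated list in the hostname field corresponds to the seed list of a standard MongoDB connection string, which separates members with commas. A minimal sketch of that correspondence (hostnames as in the example above; this is an illustration, not the Studio’s code):

```java
public class ReplicaSetUri {

   // Turn a space-separated host:port list (as entered in the hostname
   // field) into a standard comma-separated MongoDB connection string.
   public static String toConnectionString(String hosts) {
      return "mongodb://" + String.join(",", hosts.trim().split("\\s+"));
   }

   public static void main(String[] args) {
      System.out.println(toConnectionString(
         "mongo0.yourdomain.com:27017 mongo1.yourdomain.com:27017 mongo2.yourdomain.com:27017"));
   }
}
```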

In version 1.3.1 a “Prefer secondary” flag was added to the login screen, allowing you to connect to a secondary instead. This is also available for Gateway connections using the secondaryPref parameter.

If the primary is no longer accessible, you may get a database error. In this case, click on the logout link. The login screen will already be populated with the session’s information, and when you click on the login button the driver will connect to the secondary that has now become the primary. The Studio will preserve your state and will be connected to the new primary.

SSL Certificates for JSON Studio

JSON Studio ships with a self-signed certificate that will not match the host where you run the Studio, its validity period, etc. Therefore, when you use a browser to connect to the Studio, the browser warns you that it cannot verify the authenticity of this certificate and that you should not proceed unless you can assume this risk. In most companies, if the application is used internally, such risk is acceptable. When you proceed and accept the risk (assuming you are using JSON Studio within your enterprise) the session is established and will be encrypted.

JSON Studio runs completely within Apache Tomcat and the SSL configuration is Tomcat’s standard configuration. If you want to install a true certificate for your host (either a certificate issued by a CA you operate within the company, or one purchased from a standard CA), follow the instructions at tomcat_ssl to install your certificate for the Studio’s Tomcat.

Running JSON Studio without SSL

By default all access to JSON Studio and the Sonar Gateway is through SSL. You can however disable the SSL layer if you use another SSL termination mechanism. To do so, edit <install dir>/jsonar/sonarFinder/WEB-INF/web.xml and replace CONFIDENTIAL with NONE in the security constraint section, to read:

<security-constraint>
    <web-resource-collection>
      <web-resource-name>Secured</web-resource-name>
      <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
      <transport-guarantee>NONE</transport-guarantee>
    </user-data-constraint>
</security-constraint>

Edit <install dir>/conf/server.xml and remove the redirect and the SSL connector and leave only an HTTP connector, i.e.:

<Connector port="8080" maxThreads="150" scheme="http" secure="false" />

Then restart the tomcat server.

Working with the Debug Log

JSON Studio writes its logs in JSON format and places them in the logs directory in your installation folder. By default logging is turned off. You may be asked by a jSonar support representative to enable logging to troubleshoot a certain condition.

Control the logging level by navigating to the following URL using your browser (replace the level value with the desired debug level):

https://<your hostname>:<8443 or your port number>/holterMonitor?action=set_debug&level=0

Debug levels are between 0 and 10 with 10 being the most verbose. Your support representative will tell you what level to turn it to. When done, turn it back to 0 to prevent unnecessary overhead.

Increasing the debug level starts the log collection process but does not write it to disk. After you complete your debugging session and have reproduced the issue, dump the log to the logs directory by navigating to:

https://<your hostname>:<8443 or your port number>/holterMonitor?action=dump

A file is also written if the Studio experiences a problem while it is in logging mode (at any verbosity level).

Files older than 7 days are deleted automatically by the Studio when it is running.

When you navigate to these URLs you must authenticate to JSON Studio using the passcode that you provided to jSonar when you received your license files (the license file contains a hash of this passcode that only you know). The debug log will not contain any data - only traces of the software’s entry points. If you are running an eval version this passcode is “eval” (without the quotes). As an example, you would use the following URL:

https://<your hostname>:<8443 or your port number>/holterMonitor?action=dump&hash=eval

To generate this hash for inclusion in your license files go to the installation directory and under it to:

jsonar/sonarFinder/WEB-INF/classes

and run:

java -classpath . com.jsonar.util.DebugHashGenerator <your passcode>

This will create a hash value that you should provide to jSonar for inclusion in your license files.

The code for this class is simply:

import java.security.MessageDigest;
import javax.xml.bind.annotation.adapters.HexBinaryAdapter;

public static void main(String[] args) {
   // args[0] is the passcode string
   try {
      MessageDigest md5 = MessageDigest.getInstance("MD5");
      String hex = (new HexBinaryAdapter()).marshal(md5.digest(args[0].getBytes()));
      System.out.println(hex.toLowerCase());
   } catch (Exception ex) {
      ex.printStackTrace();
   }
}
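Note that HexBinaryAdapter comes from javax.xml.bind, which was removed from the JDK in Java 11. If you need to compute the same hash on a newer JVM, a dependency-free sketch (not the shipped class) is:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PortableHashGenerator {

   // Same MD5-to-lowercase-hex computation, but using only java.security
   // so it works on JDKs where javax.xml.bind is no longer bundled.
   public static String md5Hex(String passcode) {
      try {
         MessageDigest md5 = MessageDigest.getInstance("MD5");
         StringBuilder hex = new StringBuilder();
         for (byte b : md5.digest(passcode.getBytes())) {
            hex.append(String.format("%02x", b));
         }
         return hex.toString();
      } catch (NoSuchAlgorithmException e) {
         throw new IllegalStateException(e); // MD5 is always available
      }
   }

   public static void main(String[] args) {
      // Pass your passcode as the first argument; "eval" is the eval passcode.
      System.out.println(md5Hex(args.length > 0 ? args[0] : "eval"));
   }
}
```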

If you do not want this functionality enabled, ask jSonar to create a random hash; since a one-way hash function is used, it is impossible to know what passcode matches this hash.

The system also implements a local authentication exception: if your browser is on the same host where JSON Studio is running, you can enable these logging features without authenticating.

Inactivity Timeout

JSON Studio will log you out after a period of inactivity. By default the timeout period is 30 minutes.

You can modify the default timeout period. To change it, shutdown your Tomcat server, edit <install dir>/jsonar/sonarFinder/WEB-INF/web.xml and change the setting in the line shown below and restart Tomcat:

<session-timeout>30</session-timeout>

The parameter is in minutes. Two minutes before the timeout a dialog will warn you.

Displaying Document Creation Date/Time from oid

ObjectIds (the default value of the _id field assigned by MongoDB) embed within them the second in which the document was created. This information is sometimes useful for users who want to see the document creation time.

JSON Studio allows you to display this information when documents are returned in a result set. By default this information is not presented, but you can enable it using the Preferences option called “Create derived date fields from _id ObjectId”, available in the Presentation section of either the Finder or the Analytics applications. If you set this preference to 1, all displayed documents will include two additional fields called lmrm__oid_date and lmrm__oid_time (the former is an ISODate and the latter is a long). For example:

{
 "_id": {
     "$oid": "52430e367db75d44d25711ee"
 },
 "continent": "Asia",
 "country": "Azerbaijan",
 "gdp": 68.804,
 "balance": 20.308,
 "population": 9.235,
 "lmrm__oid_date": {
     "$date": "2013-09-25T16:24:22.000Z"
 },
 "lmrm__oid_time": 1380126262
 }

When the preference is set to 1 these fields will appear in any display - JSON, tabular, reports through the Gateway, graphs in the Studio and through the Gateway, etc. However, this is purely a front-end display - these fields do not really exist in the collection and MongoDB does not know about them. Therefore, you cannot use them for filtering, projection or sort (MongoDB will just ignore them because they are not there), and you cannot use them in an aggregation pipeline in the Analytics. Also, these fields are only constructed for the _id field and only if that field is an ObjectId; it will not work on any other field or at any other level.
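These derived fields can be computed because the first four bytes of an ObjectId encode the document’s creation time in seconds since the Unix epoch. A sketch of that derivation, using the $oid value from the example above (an illustration, not the Studio’s code):

```java
import java.time.Instant;

public class OidTimestamp {

   // The first 4 bytes (8 hex characters) of an ObjectId are the
   // creation time in seconds since the Unix epoch.
   public static long oidTime(String oidHex) {
      return Long.parseLong(oidHex.substring(0, 8), 16);
   }

   public static Instant oidDate(String oidHex) {
      return Instant.ofEpochSecond(oidTime(oidHex));
   }

   public static void main(String[] args) {
      String oid = "52430e367db75d44d25711ee";
      System.out.println(oidTime(oid));  // 1380126262
      System.out.println(oidDate(oid));  // 2013-09-25T16:24:22Z
   }
}
```

These are exactly the values shown as lmrm__oid_time and lmrm__oid_date in the example document.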

Sometimes you do want to query and slice/dice on these values. For example, you might want to report on how many documents are created every hour or day. In this case you should consider adding an ISODate field to each document when you first insert it.

If you do not have that option, or already have documents that were inserted without that field, you can materialize these fields using JSON Studio. This functionality is available only where JSON Studio inserts documents into a collection - specifically in the Finder when you save the result of a query into a collection, in the Gateway when you use output=collection.<col name>, and in the Analytics (Dicer) when you are not using $out, because the results of the pipeline are inserted into a temporary collection (including when copying it to a collection you name).

If the flag is set to 1 when you invoke any of these writing operations then when documents are inserted or updated they will include two additional fields (mat for materialized):

"lmrm__oid_date_mat": {
    "$date": "2013-09-25T16:24:22.000Z"
},
"lmrm__oid_time_mat": 1380126262

Using these new collections you can now run aggregations or queries using these dates (e.g. using MongoDB data operators in an aggregation group stage).

Note that the _mat fields are real MongoDB data fields and exist in your collections and can be used in any place a regular field is used whereas the lmrm__oid_time and lmrm__oid_date fields do not really exist and are for display only. Also note that when you query a collection that has materialized fields you will also see the display fields (if the flag is set to 1) so that a document might look like:

{
  "_id": {
    "$oid": "52430e367db75d44d25711e7"
  },
  "continent": "Asia",
  "country": "Afghanistan",
  "gdp": 19.906,
  "balance": 4.017,
  "population": 32.017,
  "lmrm__oid_date_mat": {
    "$date": "2013-09-25T16:24:22.000Z"
  },
  "lmrm__oid_time_mat": 1380126262,
  "lmrm__oid_date": {
    "$date": "2013-09-25T16:24:22.000Z"
  },
  "lmrm__oid_time": 1380126262
}

Custom Operators

When you use a $match stage in aggregation pipelines or when you use the Finder to build find() queries, you can add conditions by selecting an operator from the Filter Op menu button and then double clicking on a field or value to add the condition.

The filter operators available within the Studio include commonly used operators such as equals, exists, greater than, smaller than, etc., as well as a LIKE operator (for regex matching) and even specialized operators such as SUBQUERY, which is a JSON Studio-specific operator. In each case, when you double click on a field or value a condition is added. The format of the condition depends on the operator. For example, a <= operator will add something like “customerNumber” : {$lte: 103 } or “customerNumber” : {$lte: <enter value> }, depending on whether you double click on the field or on the value. But in both cases there is a template for the condition into which the field name and the value are plugged.

In version 2.5 of the Studio there were 13 operators in the menu button drop down, and this number has been steadily increasing as users request support for operators they use often. However, MongoDB has many tens of operators, so version 2.6 introduced a feature that allows you to add your own operator templates. Additionally, these operators can be used in projection and grouping in addition to filtering (match).

Custom operators are string templates defined as aliases. You can enter any template string as an alias. For it to be treated as an operator, the alias name must start with the string lmrm__cust_op_. The rest of the alias name is the string that will appear in the menu drop down. For example, to add an operator called OP1, add an alias named lmrm__cust_op_OP1. The value of the alias is the string template. This string is added to the condition upon a double click. If the template contains $$path, it is replaced with the field name upon the double click. If the template contains $$value, it is replaced by the value (assuming you clicked on the value and not the field name).

After you save the aliases to the database with a new custom operator added, hit the refresh button of the browser and the application will load with the new custom operator added to the menu drop down.

Let’s look at two examples - a filter example and a projection example.

For the filter example, assume that we have documents that have a YEAR field and we often do queries to look at documents starting with a certain year. That’s easy because we already have a >= operator that creates a $gte expression. But now assume that we also have documents in the collection which are missing the YEAR field and if that’s the case we do want them to show up (perhaps because we want to clean this up or because empty means “this year”). We can do this with two double-click conditions or a $and operator but it’s quite a bit of typing especially if we do it all the time. Instead, let’s construct a custom operator.

In the Alias Builder create an alias with the name:

lmrm__cust_op_>= OR NULL

This means that in the drop down the operator will show up as “>= OR NULL”.

For the value of the alias put:

$or: [{$$path: {$gte: $$value}},{$$path: {$exists: false}}]

Now save the aliases and refresh the browser window. A new menu item will appear. When you select it, it will show the template (with a CUSTOM prefix for clarity). If you now double click on a JSON document on the 2013 value for the YEAR field, the added condition is:

$or: [{YEAR: {$gte: 2013}},{YEAR: {$exists: false}}]
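The expansion performed on a double click amounts to a template substitution; a sketch of the kind of expansion described (an illustration, not the Studio’s implementation):

```java
public class CustomOperatorTemplate {

   // Expand a custom-operator template: $$path is replaced with the
   // field name, $$value with the double-clicked value (if any).
   public static String expand(String template, String path, String value) {
      String out = template.replace("$$path", path);
      return value == null ? out : out.replace("$$value", value);
   }

   public static void main(String[] args) {
      String tpl = "$or: [{$$path: {$gte: $$value}},{$$path: {$exists: false}}]";
      System.out.println(expand(tpl, "YEAR", "2013"));
   }
}
```

Note that this also covers the projection case: in a template like "$$$path" the $$path part is replaced by the field name, leaving the literal leading $ that MongoDB expressions require.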

Now let’s look at a projection example. Suppose you have data where you often have pairs of numbers where one is a part of the other and it’s very common for you to want to get the percentage value. This is so common that you don’t want to type the projection every time but want to create a custom projection that you can use.

In the Alias Builder create an alias with the name:

lmrm__cust_op_%

and for the value of the alias put:

percent: {$multiply: [100, {$divide: ["$$$path", "$OTHER FIELD"]}]}

If your document includes two fields called ACTUAL_ELAPSED_TIME and CRS_ELAPSED_TIME then double clicking on the ACTUAL_ELAPSED_TIME field will yield:

percent: {$multiply: [100, {$divide: ["$ACTUAL_ELAPSED_TIME", "$OTHER FIELD"]}]}

which you would manually edit and change to:

percent: {$multiply: [100, {$divide: ["$ACTUAL_ELAPSED_TIME", "$CRS_ELAPSED_TIME"]}]}

(and possibly also change percent to a more meaningful field name if you will compute a number of percentage values).


Copyright © 2013-2016 jSonar, Inc
MongoDB is a registered trademark of MongoDB Inc. Excel is a trademark of Microsoft Inc. JSON Studio is a registered trademark of jSonar Inc. All trademarks and service marks are the property of their respective owners.