Frequently asked questions (FAQ)

General

What ports does JSON Studio use?

JSON Studio is a Tomcat-based Web application that uses ports 8080 and 8443 by default. You specify these ports when you install JSON Studio and you can modify them at any point by editing conf/server.xml. In practice only port 8443 is used: the Studio must be accessed over SSL to protect sensitive information that may be in the database. When you access the Studio using HTTP on port 8080 it redirects to HTTPS on port 8443. Therefore, if you do not want to use two ports you can edit conf/server.xml and remove the following line:

<Connector port="8080" enableLookups="false" redirectPort="8443" />

I’m using Internet Explorer and point-and-click does not work

Internet Explorer has an accelerator button that “hijacks” double-click events. If you are using IE and see a little blue arrow when you double-click, disable the accelerator button in Internet Explorer’s settings.

Can I run JSON Studio on a WAN?

Yes. JSON Studio runs in a browser and uses AJAX to do partial-screen refreshes, so using a remote browser over a WAN to reach the Studio is not a problem. The connection between the Studio and the MongoDB databases can also go over a WAN, but some applications make many round trips to the database, and over a WAN the latency can add up. The Aggregation Builder is the application that can suffer the most when the network latency between the Studio and the database is high - especially when using MongoDB 2.4 (version 2.6 has an $out aggregation option that can be used to avoid the result inserts). The Spreadsheet Bridge can also be affected, especially when importing large spreadsheets, because every row in the spreadsheet causes an update command to be performed. Nevertheless, the Studio can be used in such scenarios - operations just take longer.

What does this message mean? “Possibly multiple windows/tabs open - close all JSON Studio windows/tabs but one.”

JSON Studio applications share a single context. This allows cross-application navigation that preserves full context. For example, you might be querying a collection in the Finder and then navigate to the Spreadsheet Bridge to export the data to Excel – all the time preserving the query you constructed in the Finder using facets.

As a result of this shared context, it is not recommended to have more than one browser tab or window for JSON Studio (the Schema Analyzer and the Differ are exceptions - you can keep them open in parallel). You should also not use the browser’s back, forward and refresh buttons - navigate using the Studio’s navigation links.

Nothing bad will happen if you do open more than one but the results may be confusing. For example, if you have two tabs open on the same application and there is a cursor open, the location of the cursor is shared. In this case, if you click next in one, it affects the other as well. Then, if you progress the cursor in another tab you may be surprised that you “jumped ahead” more than you expected.

While nothing bad will happen if you do have multiple tabs/windows open on the same application, the application warns you with this message if it thinks you have more than one open. If you do have multiple such tabs open, consider closing them; in any case you can dismiss the warning.

The message is displayed only once per application and will not recur (for that application) once you dismiss it. Note that this message will also appear if you refresh the page, since the server has no way to know whether it is the same page or a new tab/window.

Why can’t I use the browser’s back and forward buttons?

There are two reasons. The first is that this will cause the Studio to think there are two windows/tabs open and you will get the message from the previous FAQ question. The more important reason is that the Studio uses a nonce mechanism for security in order to combat a well-known attack method called Cross-Site Request Forgery (CSRF). When nonces are used, the back and forward buttons should not be used. This is a bit annoying, but since most enterprises require all applications to be CSRF-resistant, and since security is of utmost importance when dealing with data, it is a small price to pay. Once you get used to the Studio’s navigation links you will not even miss the back and forward buttons.

How do I use regular expressions in queries?

For the most part, use the MongoDB $regex operator. For example, to match case-insensitive “love” within the field called “text” do:

text: {$regex: "love", $options: "i"}

This is true for the Aggregation Builder, the Spreadsheet Bridge and the “Additional where” field in the Finder. The only exception to this rule is the facet search widgets in the Finder, which support the syntactic-sugar approach used in MongoDB’s shell, allowing you to enter:

/love/i
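To see how the two forms relate, here is a small sketch (in Python, using the standard re module; the query is shown as a plain dict mirroring the example above - this is illustration, not a Studio API):

```python
import re

# Driver-style form used in most Studio apps, shown as a Python dict:
query = {"text": {"$regex": "love", "$options": "i"}}

# The $options "i" flag corresponds to a case-insensitive match:
pattern = re.compile(query["text"]["$regex"], re.IGNORECASE)
print(bool(pattern.search("Love conquers all")))  # matches despite the capital L
```

The facet widgets' /love/i form is simply the shell shorthand for the same pattern and options.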

What’s the difference between cursors and chunks?

When you work with collection viewers there are cursors and there are chunks as shown below:

[Image faq1.jpg: collection viewer showing the cursor and chunk controls]

The JSON display in a collection viewer shows you a number of documents, as set in the “Show” pull-down. Use the cursor controls to progress through the collection and show the next document(s). Since each document (or set of documents) can itself be very large, at each point in time the viewer shows 64K characters. Use the chunk controls (up/down triangles) to progress through the text of the document(s).
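The chunking behavior amounts to paging through the rendered JSON text in fixed-size slices. A rough sketch (the 64K figure comes from the text above; the helper function is hypothetical, not a Studio API):

```python
CHUNK_SIZE = 64 * 1024  # the viewer shows 64K characters at a time

def chunk(text, index):
    """Return the index-th 64K slice of the rendered JSON text."""
    start = index * CHUNK_SIZE
    return text[start:start + CHUNK_SIZE]

doc_text = "x" * 150000          # stand-in for a large rendered document
print(len(chunk(doc_text, 0)))   # 65536 - a full first chunk
print(len(chunk(doc_text, 2)))   # 18928 - the remainder
```

The cursor controls move between documents; the chunk controls move between slices of the same document text.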

I get a database error saying something about a cursor not found on server

MongoDB cursors are closed by the server after 10 minutes. While it is possible for an application (like JSON Studio) to open a cursor without a timeout, this is considered irresponsible since the database may run out of resources if the application fails. Therefore, all cursors used by JSON Studio may be closed by the server, and if you are still using such a cursor you will get an error of the form:

cursor 221064784718184004 not found on server mongo03.jsonar.com/192.168.0.110:27017

When this happens JSON Studio opens a new cursor for you or you can re-submit your search - so there is no impact to your work.

If you are using a long-running operation such as a Gateway call, the SonarDiffer, or creating a join collection, consider splitting the work into multiple runs for very large collections. For example, you can use a range-based query on one of the indexed fields to ensure that each run does not take more than 10 minutes and the cursors are not invalidated.
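One way to picture the range-based split is as a series of query documents over an indexed numeric field. A minimal sketch (the field name "seq" and the ranges are made up for illustration):

```python
# Hypothetical sketch: split one long-running scan into several shorter runs
# by adding a range condition on an indexed numeric field ("seq" is invented).
def range_queries(lo, hi, step):
    """Yield query documents covering [lo, hi) in half-open slices."""
    for start in range(lo, hi, step):
        yield {"seq": {"$gte": start, "$lt": min(start + step, hi)}}

queries = list(range_queries(0, 1_000_000, 250_000))
print(len(queries))   # 4 shorter runs instead of one long scan
print(queries[0])     # {'seq': {'$gte': 0, '$lt': 250000}}
```

Each run then completes well within the 10-minute server-side cursor timeout.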

I get a Java out of memory error or heap size error

JSON Studio runs as a Java application within Tomcat. The default setting for the Java heap size is 1024M. If you are working with very large documents the application may run out of memory when it tries to load one document (for example, to display the document within the collection viewer). In such a case you can increase the maximal heap size as follows:

Shut down your server using stop_sonar_finder.{sh|bat}, depending on your operating system.

The setting for the maximal heap size is in setenv.{sh|bat} in the bin directory:

$ cat setenv.sh
export CATALINA_PID="$CATALINA_BASE/tomcat.pid"
export JAVA_OPTS="-server -Xmx1024m"
$ cat setenv.bat
set CATALINA_PID=%CATALINA_BASE%\tomcat.pid
set JAVA_OPTS=-server -Xmx1024m

Change 1024m to 2048m to increase the heap size to 2GB, save the file and restart Tomcat using start_sonar_finder.{sh|bat}.

I’m running the Studio on a desktop that does not have a lot of memory

JSON Studio runs as a Java application within Tomcat. The default setting for the Java heap size is 1024M, but the Studio can easily run with 512M with normal-size collections. You can decrease the maximal heap size as follows:

Shut down your server using stop_sonar_finder.{sh|bat}, depending on your operating system.

The setting for the maximal heap size is in setenv.{sh|bat} in the bin directory:

$ cat setenv.sh
export CATALINA_PID="$CATALINA_BASE/tomcat.pid"
export JAVA_OPTS="-server -Xmx1024m"
$ cat setenv.bat
set CATALINA_PID=%CATALINA_BASE%\tomcat.pid
set JAVA_OPTS=-server -Xmx1024m

Change 1024m to 512m to decrease the heap size to 0.5GB, save the file and restart Tomcat using start_sonar_finder.{sh|bat}.

I get a java.net.ConnectException when shutting down the Studio

When shutting down the Studio you might see a connection error as follows:

$ ./stop_sonar_finder.sh
Using CATALINA_BASE:   /home/qa/sonarFinder
Using CATALINA_HOME:   /home/qa/sonarFinder
Using CATALINA_TMPDIR: /home/qa/sonarFinder/temp
Using JRE_HOME:        /usr/lib/jvm/java-6-openjdk-amd64/jre/
Using CLASSPATH:       /home/qa/sonarFinder/bin/bootstrap.jar:/home/qa/sonarFinder/bin/tomcat-juli.jar
Using CATALINA_PID:    /home/qa/sonarFinder/tomcat.pid
Sep 28, 2013 3:48:12 AM org.apache.catalina.startup.Catalina stopServer
SEVERE: Could not contact localhost:8005. Tomcat may not be running.
Sep 28, 2013 3:48:13 AM org.apache.catalina.startup.Catalina stopServer
SEVERE: Catalina.stop:
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:385)
    at java.net.Socket.connect(Socket.java:546)
    at java.net.Socket.connect(Socket.java:495)
    at java.net.Socket.<init>(Socket.java:392)
    at java.net.Socket.<init>(Socket.java:206)
    at org.apache.catalina.startup.Catalina.stopServer(Catalina.java:500)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:371)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:458)
Tomcat did not stop in time. PID file was not removed. To aid diagnostics a thread dump has been written to standard out.

This happens when a shutdown is attempted before Tomcat has fully completed its launch. The full launch may take as long as a minute (even though the application is working and can be used almost immediately). One of the last things to be initialized is the shutdown listener on port 8005, which is used by the shutdown script. You can tell whether it is up and running by doing a netstat and seeing if there is a LISTEN on port 8005.

When you do get this error message you can do one of two things: wait a minute and then perform the shutdown cleanly, or manually kill the Tomcat process (on *nix, using a kill command with the PID). No damage is done to the server, application or data in either case.

“Tomcat did not stop in time” message in the catalina log file

A message of the form “Tomcat did not stop in time. PID file was not removed” can appear in the catalina log files when the Tomcat server does not shut down fast enough when running stop_sonar_finder.{sh|bat}. This is mostly a benign message that usually means the server was shut down, just not fast enough. You can tell whether it is still running by looking at the process list (e.g. ps on *nix). The default time that the script waits is 5 seconds. If you want to get rid of this message (when shutdown is just taking a bit longer), add a longer timeout value to <install dir>/stop_sonar_finder.sh by changing the last line from:

$CATALINA_HOME/bin/shutdown.sh

to:

$CATALINA_HOME/bin/shutdown.sh 10

What are all these collections that start with lmrm__?

The Studio needs to persist data for its operation. Examples include saved searches, saved graphs, saved preferences and more. All such data is stored in various collections in the Studio database, and all these collections have a prefix of lmrm__.

What is lmrm__audit_trail?

lmrm__audit_trail is a collection in the Studio database that maintains an audit trail of logins to and logouts from JSON Studio. It does not replace a full audit trail that you can get at the database level, but it is useful for some compliance needs. An example of an audit record is:

{
   "_id" : ObjectId("526297767eaaf9325e6a8c75"),
   "event" : "LOGIN",
   "app" : "JSON Studio",
   "username" : "qa",
   "host" : "localhost",
   "real_host" : "localhost",
   "db" : "lmrm",
   "studioDb" : "",
   "studio_host" : "qa.jsonar.com",
   "acceptAnyCert" : false,
   "port" : "27017",
   "real_port" : 27017,
   "KRB" : false,
   "login_time" : ISODate("2014-07-03T23:30:53.045Z"),
   "logout_time" : ISODate("2014-07-03T23:31:11.757Z"),
   "client_ip" : "192.168.56.101"
}

All times are in UTC. Upon login the record will not have a logout_time field; this is populated during the logout or the time that the session expires.
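Because the absence of logout_time marks an open session, queries against lmrm__audit_trail are straightforward. A sketch of two such query documents (shown as Python dicts; the field names come from the sample record above, the time window values are illustrative):

```python
from datetime import datetime

# Sessions that have logged in but not yet logged out:
open_sessions = {"event": "LOGIN", "logout_time": {"$exists": False}}

# Logins by a given user within a time window (window values are made up):
by_user = {
    "username": "qa",
    "login_time": {"$gte": datetime(2014, 7, 1), "$lt": datetime(2014, 8, 1)},
}
print(open_sessions)
```

Remember when building such windows that all times in the collection are UTC.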

What does lmrm stand for?

Lean-Mean-Reduction-Machine - what our flagship product SonarW is really good at. Yes - we know it’s too geeky for words so we pretty much ignore where it came from and it’s just a prefix now.

I’m getting an error that I’ve exceeded my license but I don’t think I am.

JSON Studio is licensed by concurrent users. For example, if you have a license for 10 concurrent users then you can have up to 10 sessions where a user is connected to a database through the Studio. When the 11th user tries to login, they will get a licensing error. As soon as one of the 10 working users logs out the 11th user can sign in.

A session is reclaimed and released back to the available pool when a user logs out. If a user has a 30 minute period of inactivity their session is reclaimed and they are automatically logged out.

Because the interface to the Studio is a browser, a user can close the browser without logging out of the application. If many users do this at the same time, their sessions are still being “counted” and users trying to connect may get the license error. This lasts only until the 30-minute inactivity period for each such session elapses, after which the sessions are reclaimed.

“Exceeded license of -1 currently connected 0” on the Tomcat console.

This happens if the license file is corrupt. Shut down the Studio using stop_sonar_finder.{sh|bat}, copy the backup license file or get a new license file from your jSonar account representative, and start the Studio using start_sonar_finder.{sh|bat}.

Can I connect using users defined in the admin database?

Yes. When you log in to the Studio you provide a set of credentials, a database name, and optionally another database name to use as the Studio database. First, the Studio tries to authenticate with the database directly and a sample command is attempted. However, a user may have been defined in the admin database with the “Any” roles (e.g. readAnyDatabase, readWriteAnyDatabase) rather than being defined in the database directly. Therefore, if the direct authentication fails, the Studio tries to authenticate with the admin database and, if that succeeds, tries to use the database and establishes the session.

Opening a tabular display in the Finder or the Aggregation Builder fails.

The “open tabular display” buttons in the Finder and the Aggregation Builder use Google Visualization libraries and only work when connected to the Internet. If you are working offline, a tab/window opens but remains empty. In that case, consider using the Visualizer for charting and the Spreadsheet Bridge for tabular displays - these work both online and offline (and also provide more functionality).

Finder

When I copy/paste queries shown in the Finder to the shell I sometimes get errors

The MongoDB shell has special syntax to handle various BSON types, and the query shown in the Studio Finder is more suited for use within MongoDB drivers. Therefore, if you want to copy/paste to the shell you might have to modify the query syntax to handle BSON types.

As an example, if you have a date field within your document, a typical query constructed from facets or point-and-click might look like:

db['foo'].find({
   "real_date": {
      "$date": "2013-08-25T22:42:39.054Z"
   }
})

If you run this in the shell you will get an error:

> db['foo'].find({
...  "real_date": {
...   "$date": "2013-08-25T22:42:39.054Z"
...   }
...  })
error: { "$err" : "invalid operator: $date", "code" : 10068 }

Instead, use the shell syntax:

> db['foo'].find({  "real_date": ISODate("2013-08-25T22:42:39.054Z")  })

How do the sort and limit work in the Studio?

When you issue a sort (either in the Finder or in the Spreadsheet Bridge), the Studio will automatically add a limit clause. This is so that the query does not negatively impact the database. The Studio will check your preferences and look for the parameter controlling the limit on sorted results. This is a numeric parameter and the Studio will add a limit(<param>) clause to the end of your query. The default for this value is 1000 but you can change this. If you change this to -1 then the Studio will no longer put an automatic limit – do this only if you plan on sorting on an indexed field.
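The decision the Studio makes can be summarized in a few lines. A hypothetical sketch (the function and parameter names are invented; the default of 1000 and the -1 convention come from the text above):

```python
# Rough sketch of the automatic limit added to sorted queries.
def effective_limit(has_sort, pref=1000):
    """Return the limit the Studio would append, or None for no limit."""
    if not has_sort:
        return None      # a limit is only added when a sort is issued
    if pref == -1:
        return None      # user explicitly disabled the automatic limit
    return pref

print(effective_limit(True))       # 1000 - the default cap
print(effective_limit(True, -1))   # None - only safe on indexed sorts
print(effective_limit(False))      # None - unsorted queries are untouched
```

The cap exists so that an accidental sort on an unindexed field of a huge collection cannot monopolize the database.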

The facet menus seem to be only leaf nodes - why?

Facets are populated with leaf nodes to allow simple selection. Menus with sample values can only represent primitive values (like strings, numbers, etc.), and hence only leaf nodes come up. However, for the SELECT and the SORT search widgets you can also specify non-leaf nodes by adding the colon where appropriate. This is possible because these widgets only accept 1 and -1. For example, assume that your documents have a user field that is a sub-document with a first_name and a last_name. The facet menus will show you user.first_name and user.last_name but not user alone. However, as you are typing, feel free to type in user: 1 or user: -1 (in the SORT widget) and the Finder will do the rest.

Why do I need a parameter that controls the depth for populating facets?

There is a non-trivial trade-off here. On the one hand, the facet menu can include all fields (i.e. any depth level) so that you can choose any field from a facet. But when the documents get very complex and have hundreds and sometimes thousands of entries, these menus become unwieldy - even though they cascade. Since there can be different scenarios, rather than hard-code the depth, it is exposed as a preference which you can adjust. The most common values seen with users until now have been between 2 and 4 (inclusive).

I changed the facet depth preference and nothing happened.

When you change the depth parameter you need to let the Finder rebuild the facets. De-select the collection from the left pane and re-select it - the facets will be rebuilt using the new value for depth.

I did a text search, clicked “next” on the cursor and the results were cleared

Collection viewers and their cursors work on the results of a find. Results of a text search have a different structure and as such behave differently. When you use the Finder (with facets, for example) you have values that are used in the find as query operators (selectors), and you have the text search. You also have two separate search buttons on the screen - “Execute search” and “Execute text search”.

When you execute a regular search, the Studio runs a regular find, the results are shown in the collection viewer, and the cursor iterates over the results. When you click a text search, the result is based on both the find operators and the text you are searching for (using the text search index). The results are not a collection of documents but rather the top 100 matches.

If you select the following for example:

[Image faq2.jpg: facet selections combined with a text search term in the Finder]

and then execute a text search the result will look like:

100 results: [
   { "score": 0.6666666666666666, "obj": { "_id": { "$oid": "51a41657667c264fead19bac" }, "text": "that geico commercial was great" } },
   { "score": 0.6, "obj": { "_id": { "$oid": "517ff6baef86464d4ffd4f61" }, "text": "@_FLAWLESSnBROWN great... Still got time then" } },
   ....
]

The entire result set is shown and the cursor is not needed. However, since the query does include projection and selection documents, the collection viewer is still operational and works using a standard find operation - without the text search component. Therefore, in the example shown above, if you click next on the cursor you will get the following:

{ "_id": { "$oid": "517b0950ef8699e2533ae674" }, "text": "I like that Geico commercial with the Pillsbury doughboy in it" }

which in this case includes the next document resulting from the find with the projection and selection but not the text search component.

What rows are shown by the “Open Tabular” button?

When you click the “Open Tabular” button in the Finder’s (or Aggregation Builder’s) result set collection viewer, a table is created. The first row of this table comes from the first result shown in the cursor. The number of rows is determined by the “Max number chart/table data points” Preferences value (so long as there are enough results). The columns are built by traversing the top-level fields; sub-documents are represented by their JSON encoding.

In the result set table view, when I have documents with different fields, the column headers show the last document’s fields

This is a known issue and will be resolved in a future release; for now, click on the refresh button to refresh the column headers.

What color-coding is used in the Finder?

There are two colors you might see in the Finder. Red (or red/orange) in various buttons means that the query may be slow because the collection does not have indexes that can support the query conditions. A yellow background for the query string means that the query has been modified (e.g. by adding new conditions) but has not been run yet, and thus the results shown in the Table View or the JSON data are for the previous query execution. If you execute the query the yellow will disappear.

Aggregation Builder

Can I drop collections with names like lmrm__agg_pipeline__127_0_0_1_60074?

When you use the Aggregation Builder and do not use $out stages (e.g. when you are not yet using MongoDB 2.6), the results of a run of a pipeline or stage are inserted into a transient collection called something like lmrm__agg_pipeline__127_0_0_1_60074.

These collections are in the Studio database and are managed by the Studio. Their name is composed by concatenating the client IP with the port - uniquely identifying that user session.

These collections are dropped when the user logs out or when the session is reclaimed due to a period of inactivity. There is also a TTL collection called lmrm__agg_pipeline_ttl that is used by the Studio to reclaim orphan collections in case a session was not reclaimed because the Studio was shutdown.

As long as the Studio is running these collections will be dropped. However, if no one is using these collections you can manually drop them at any point in time yourself.

CAUTION: While you can drop the lmrm__agg_pipeline collections with the IP/port postfix, don’t drop other lmrm__ collections, as they contain user data related to the Studio.

I get an error running a pipeline saying “fields stored in the db can’t start with a ‘$’”

If you are using a pipeline without an $out step, the aggregation results are stored in a transient collection. In that case the aggregation results cannot have a field that starts with a “$”. Remove those fields using a projection stage or, if you are running on MongoDB 2.6 and up, use an $out stage. As an example, if you are aggregating system.profile there are fields that have the following values:

"query": {
  "expireAfterSeconds": {
    "$exists": true
  }
}

Insertion of such a document will fail in the last phase of the pipeline, just like the following fails in the shell:

> db.t1.insert({"query": {
...     "expireAfterSeconds": {
...       "$exists": true
...     }
...   }})
Tue Jan 14 12:42:25.852 JavaScript execution failed:
field names cannot start with $ [$exists] at src/mongo/shell/collection.js:L147

Spreadsheet Bridge

What’s happening to my numeric data when I export to Excel and re-import?

If you export data to Excel and then re-import it, your numeric data (int, long, etc.) may not be preserved exactly as in the original document. This is due to the fact that Excel natively stores numbers as floats. When you export data to Excel, int and long values are marked as such using cell formatting. Upon re-import, all numeric data with a format of no decimal places is introduced as an int and all other numeric data as a float.
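The underlying issue is that a float (an IEEE-754 double, which is what Excel uses) can represent integers exactly only up to 2**53. A quick demonstration of the round-trip loss:

```python
# Doubles can hold integers exactly only up to 2**53; beyond that,
# round-tripping a long through a float silently changes the value.
big = 2**53 + 1
print(int(float(big)) == big)       # False - the +1 is lost in the float
small = 2**53
print(int(float(small)) == small)   # True - still exactly representable
```

So any long value above 2**53 in your documents may come back slightly altered after an Excel round trip.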

How do the sort and limit work in the Studio?

Sort and limit behave the same in the Spreadsheet Bridge as in the Finder: when you issue a sort, the Studio automatically adds a limit clause based on the preference controlling the limit on sorted results (default 1000; set it to -1 to disable the limit, but do so only when sorting on an indexed field).

I get errors when importing large xlsx files

If you use the streaming import method (for files over 2M), note that Excel sometimes stores strings in a way that is hard to process as a stream. The error that comes up will look like this (with your data):

{ "false" : false} --- { "$set" : { "0.0" : 1.0 , "false" : false ,
"2.76758198E8" : 2.86852594E8 , "23.0" : 1927.0 , "-21600.0" : -18000.0 ,
"3.2792247411109478E17" : 3.2792229444389274E17 , "20.0" : 1788.0 ,
"9791.0" : 47605.0 , "123.0" : 9593.0 , "true" : true} ,
"$unset" : { "false" : ""}}

Usually you merely need to re-save the file - in which case Excel rewrites the strings in an expanded form - and try again.

When I navigate to the Spreadsheet Bridge from the Finder’s result set, the subquery and bind variable are expanded; does the Spreadsheet Bridge not support these?

The Spreadsheet Bridge supports both subqueries and bind variables. However, when you navigate from the result set, those have already been expanded. If you want the original query, copy/paste the text from the additional text area into the Spreadsheet Bridge.

When I import data from a spreadsheet I get a cryptic error such as LEFT_SUBFIELD only supports Object: stats not: 6

When sharing data between MongoDB and Excel it is better to have the same structure for all documents or to export a subset that exists in all documents. Remember that the columns of a resulting spreadsheet exported through the Spreadsheet Bridge are determined from the first document. Consider what happens if you have two documents such as:

{ a: 1,
  b: {
    c: 2,
    d: 3
     }
}

{ a: 1,
  b: 4
}

In this case the spreadsheet will have columns for b.c and b.d because they are built from the first document. These columns will be empty for the second document. If you later try to re-sync data, you will get the error message since it will try to set or unset a value for b.c on something that is not an object (in the second document b is the number 4).

If all documents have the same structure, or you export/import only the fields that are common, no problem occurs. Also, if you choose to use a mapping that does not follow dot notation this will never happen.
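The column derivation described above can be sketched as a small dot-notation flattener (a hypothetical helper, not Studio code, but it shows why the second document has no place for b.c and b.d):

```python
# Columns are derived from the first document using dot notation, so a
# document where "b" is a scalar has no value for "b.c"/"b.d" - and
# syncing those columns back tries to treat the scalar as an object.
def columns(doc, prefix=""):
    cols = []
    for key, value in doc.items():
        path = prefix + key
        if isinstance(value, dict):
            cols.extend(columns(value, path + "."))
        else:
            cols.append(path)
    return cols

first = {"a": 1, "b": {"c": 2, "d": 3}}
second = {"a": 1, "b": 4}
print(columns(first))   # ['a', 'b.c', 'b.d'] - drives the spreadsheet columns
print([c for c in columns(first) if c not in columns(second)])  # the empty cells
```

The re-sync fails precisely on those dotted paths whose parent is not an object in the second document.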

Visualizer

I’m using a line graph and a logarithmic scale and see nothing

When using logarithmic scales you cannot have zero as a data point in this release. If you do have zeros in the data set, the chart will come up blank when marked as a logarithmic scale; use linear scales for such data at this point.

How can I use a gauge visualization to plot different values in the same document vs. a gauge per document?

Normally gauges are created one per document in the collection. Sometimes you have a single document with many metrics and want to plot each value as its own gauge. To do so, add a prefix to your field names using a $project, for example:

{
  "lmrmValueCPU%": 0,
  "lmrmValueRES Mem (Gb)": 12.0,
  "lmrmValueVIRT Mem (Gb)": 13.0,
  "lmrmValueDisk Usage %": 59,
  "lmrmValueRunning Qs": 0,
  "lmrmValueWaiting Qs": 0,
  "lmrmValueCursors": 0,
  "lmrmValueConnections": 11.0,
  "lmrmValueDatabases": 25,
  "lmrmValueCollections": 547
}

Then build your gauge specifying the fields name, value, min and max. Each document with the prefixed fields will be broken down into multiple fields in the resulting visualization and thus a different gauge.
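A $project stage that produces the prefixed document above might look like the following sketch (shown as a Python dict; the source field names cpu_pct, connections and collections are invented for illustration):

```python
# Hypothetical $project stage renaming plain metric fields to prefixed
# ones so each metric becomes its own gauge; source field names are made up.
project_stage = {
    "$project": {
        "_id": 0,
        "lmrmValueCPU%": "$cpu_pct",
        "lmrmValueConnections": "$connections",
        "lmrmValueCollections": "$collections",
    }
}
print(list(project_stage["$project"]))
```

Passed to an aggregation pipeline, each prefixed output field would then render as a separate gauge.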

Schema Analyzer

When building schema metadata from Finder or Schema Analyzer I get a “Too many fields for metadata calculation through JSON Studio (30000 limit reached). Try using sonarsample for this collection/database” error

Building schema metadata online using the Studio is limited to collections that have up to 30,000 distinct fields. After 30,000 fields the system stops: it saves the 30,000 fields but does not sample any additional documents. Use sonarsample.py to build full metadata for collections that have more than 30,000 distinct fields.

Differ

Nothing happens when I click the arrow buttons to open the Differ

Your popup blocker may be blocking the new tab or window from opening. Allow popups from the JSON Studio site/URL and try again.

Bind Variables

How do I add a bind variable with a drop down selection?

Drop-downs are constructed in the Schema Analyzer using a set of values that are the result of a “distinct” query - i.e. the values of a drop down are always the set of distinct values for an existing field (dot notation) in an existing collection. To add one, navigate to the Schema Analyzer, select the collection which includes the field you want to add the distinct values from (and build metadata if needed), and click on the “flag” icon to run the distinct query on the appropriate field. The distinct values will appear on the right-hand pane. Click on “$$” to add the bind variable. Then click on “Bind” to name the variable.

Gateway

I’m using curl to invoke a query through the Gateway and getting an error of an SSL certificate problem - why?

When using curl with the --ssl flag, curl tries to verify that the certificate used is a valid certificate. The certificate included in the installation package of JSON Studio has not been signed by any CA, since a certificate needs to be signed for a specific host. The error message you will see from curl looks like:

curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default bundle
file isn't adequate, you can specify an alternate file using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in the
bundle, the certificate verification probably failed due to a problem with
the certificate (it might be expired, or the name might not match the domain
name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.

You have one of three options:

1. Run curl with the -k flag - as in curl --ssl -k https://host:8443/Gateway ... In this case curl will accept the certificate even though it cannot verify it.

2. Obtain a real certificate for your host and install it. This is a normal Tomcat procedure and unrelated to JSON Studio.

3. Create a self-signed certificate, install it and make sure that curl recognizes the internal CA you use.

Most people use option 1.


Copyright © 2013-2016 jSonar, Inc
MongoDB is a registered trademark of MongoDB Inc. Excel is a trademark of Microsoft Inc. JSON Studio is a registered trademark of jSonar Inc. All trademarks and service marks are the property of their respective owners.