Tutorial: Performing Common Tasks with gfsh

This topic takes you through a typical sequence of tasks that you execute after starting gfsh.

If you haven't already, start a gfsh prompt. See Starting gfsh for details.
Step 1: Start up a locator. Enter the following command:
gfsh>start locator --name=locator1
The following output appears:
gfsh>start locator --name=locator1
Starting a Locator in /home/stymon/locator1 on pickle[10334] as locator1...
........
Locator in /home/stymon/locator1 on pickle[10334] as locator1 is currently online.
Process ID: 3464
Uptime: 4 seconds
GemFire Version: 7.0.1
Java Version: 1.6.0_38
Log File: /home/stymon/locator1/locator1.log
JVM Arguments: -Dgemfire.launcher.registerSignalHandlers=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /home/stymon/Pivotal_GemFire_701_b39782/lib/gemfire.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/antlr.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/gfsh-dependencies.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/pulse-dependencies.jar:.:/home/stymon/Pivotal_GemFire_701_b40090/lib/gemfire.jar:/home/stymon/Pivotal_GemFire_701_b40090/lib/antlr.jar:/home/stymon/Pivotal_GemFire_701_b40090/lib/gfSecurityImpl.jar:/home/stymon/Pivotal_GemFire_701_b40090/lib/jackson-core-asl-1.9.9.jar:/home/stymon/Pivotal_GemFire_701_b40090/lib/commons-logging.jar:/home/stymon/Pivotal_GemFire_701_b40090/lib/tomcat-embed-core.jar:/home/stymon/Pivotal_GemFire_701_b40090/lib/tomcat-embed-logging-juli.jar:/home/stymon/Pivotal_GemFire_701_b40090/lib/tomcat-embed-jasper.jar:/home/stymon/Pivotal_GemFire_701_b40090/SampleCode/tutorial/classes:/home/stymon/Pivotal_GemFire_701_b40090/SampleCode/helloworld/classes:/home/stymon/Pivotal_GemFire_701_b40090/SampleCode/quickstart/classes:/home/stymon/Pivotal_GemFire_701_b40090/SampleCode/examples/dist/classes:/usr/java/jdk1.6.0_38/jre/../lib/tools.jar

Successfully connected to: [host=pickle, port=1099]
In your file system, examine the folder in which you executed gfsh. Notice that the start locator command has automatically created a working directory (using the name of the locator), and within that working directory it has created a log file, a status file, and a .pid file (containing the locator's process ID) for this locator.

In addition, because no other JMX Manager exists yet, notice that gfsh has automatically started an embedded JMX Manager on port 1099 within the locator and has connected you to that JMX Manager.
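At this point you can also verify that the locator is still running from within gfsh. A minimal sketch using the status locator command (the exact output varies by version and environment):
gfsh>status locator --name=locator1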

Step 2: Examine the existing gfsh connection.

In the current shell, type the following command:
gfsh>describe connection
If you are connected to the JMX Manager started within the locator that you started in Step 1, the following output appears:
Connection Endpoints
--------------------
pickle[1099]
Notice that the JMX Manager is running on port 1099, whereas the locator was assigned the default locator port of 10334.

Step 3: Connect to the same locator/JMX Manager from a different terminal.

This step shows you how to connect to a locator/JMX Manager. Open a second terminal window, and start a second gfsh prompt. Type the same command as you did in Step 2 in the second prompt:
gfsh>describe connection
This time, notice that you are not connected to a JMX Manager, and the following output appears:
gfsh>describe connection
Connection Endpoints
--------------------
Not connected
Type the following command in the second gfsh terminal:
gfsh>connect
The command will connect you to the currently running local locator that you started in Step 1.
gfsh>connect
Connecting to Locator at [host=pickle, port=10334] ..
Connecting to Manager at [host=pickle, port=1099] ..
Successfully connected to: [host=pickle, port=1099]
Note that if you had used a custom --port when starting your locator, or you were connecting from the gfsh prompt on another member, you would also need to specify --locator=host[port] when connecting to the distributed system. For example (type disconnect first if you want to try this next command):
gfsh>connect --locator=pickle[10334]
Connecting to Locator at [host=pickle, port=10334] ..
Connecting to Manager at [host=pickle, port=1099] ..
Successfully connected to: [host=pickle, port=1099]
Another way to connect your gfsh prompt to the distributed system is to connect directly to the JMX Manager running inside the locator. For example (type disconnect first if you want to try this next command):
gfsh>connect --jmx-manager=pickle[1099]
Connecting to Manager at [host=pickle, port=1099] ..
Successfully connected to: [host=pickle, port=1099]
Step 4: Disconnect and close the second terminal window. Type the following commands to disconnect and exit the second gfsh prompt:
gfsh>disconnect
Disconnecting from: GemFireStymon[1099]
Disconnected from : GemFireStymon[1099]
gfsh>exit
Close the second terminal window.
Step 5: Start up a server. Return to your first terminal window, and start up a cache server that uses the locator you started in Step 1. Type the following command:
gfsh>start server --name=server1 --locator=pickle[10334]
If the server starts successfully, the following output appears:
gfsh>start server --name=server1 --locator=pickle[10334]
Starting a Cache server in /home/stymon/server1 on pickle[40404] as server1...
......
Server in /home/stymon/server1 on pickle[40404] as server1 is currently online.
Process ID: 3813
Uptime: 3 seconds
GemFire Version: 7.0.1
Java Version: 1.6.0_38
Log File: /home/stymon/server1/server1.log
JVM Arguments: -Dgemfire.default.locators=192.168.129.144[10334] -XX:OnOutOfMemoryError="kill -9 %p" -Dgemfire.launcher.registerSignalHandlers=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /home/stymon/Pivotal_GemFire_701_b39782/lib/gemfire.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/antlr.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/gfsh-dependencies.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/pulse-dependencies.jar:.:/home/stymon/Pivotal_GemFire_701_b39782/lib/gemfire.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/antlr.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/gfSecurityImpl.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/jackson-core-asl-1.9.9.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/commons-logging.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/tomcat-embed-core.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/tomcat-embed-logging-juli.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/tomcat-embed-jasper.jar:/home/stymon/Pivotal_GemFire_701_b39782/SampleCode/tutorial/classes:/home/stymon/Pivotal_GemFire_701_b39782/SampleCode/helloworld/classes:/home/stymon/Pivotal_GemFire_701_b39782/SampleCode/quickstart/classes:/home/stymon/Pivotal_GemFire_701_b39782/SampleCode/examples/dist/classes:/usr/java/jdk1.6.0_38/jre/../lib/tools.jar
In your file system, examine the folder in which you executed gfsh. Notice that, just like the start locator command, the start server command has automatically created a working directory (named after the server), and within that working directory it has created a log file and a .pid file (containing the server's process ID) for this cache server. It has also written files that are used for licensing.
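As with the locator, you can check on a running cache server with the status server command. A quick sketch (output details vary by version):
gfsh>status server --name=server1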
Step 6: List members. Use the list members command to view the current members of the GemFire distributed system you have just created.
gfsh>list members
  Name   | Id
-------- | ----------------------------------------
server1  | pickle(server1:3813)<v16>:3789
locator1 | pickle(locator1:3464:locator)<v13>:22473
Step 7: View member details by executing the describe member command.
gfsh>describe member --name=server1
Name        : server1
Id          : pickle(server1:3813)<v16>:3789
Host        : pickle
Regions     : 
PID         : 3813
Groups      : 
Used Heap   : 9M
Max Heap    : 485M
Working Dir : /home/stymon/server1
Log file    : /home/stymon/server1/server1.log
Locators    : 192.168.129.144[10334]

Cache Server Information
Server Bind              : null
Server Port              : 40404
Running                  : true
Client Connections       : 0
Note that no regions have been assigned to this member yet.
Step 8: Create your first region. Type the following command and then press the Tab key:
gfsh>create region --name=region1 --type=
A list of possible region types appears, followed by the partial command you entered:
gfsh>create region --name=region1 --type=

PARTITION
PARTITION_REDUNDANT
PARTITION_PERSISTENT
PARTITION_REDUNDANT_PERSISTENT
PARTITION_OVERFLOW
PARTITION_REDUNDANT_OVERFLOW
PARTITION_PERSISTENT_OVERFLOW
PARTITION_REDUNDANT_PERSISTENT_OVERFLOW
PARTITION_HEAP_LRU
PARTITION_REDUNDANT_HEAP_LRU
REPLICATE
REPLICATE_PERSISTENT
REPLICATE_OVERFLOW
REPLICATE_PERSISTENT_OVERFLOW
REPLICATE_HEAP_LRU
LOCAL
LOCAL_PERSISTENT
LOCAL_HEAP_LRU
LOCAL_OVERFLOW
LOCAL_PERSISTENT_OVERFLOW
PARTITION_PROXY
PARTITION_PROXY_REDUNDANT
REPLICATE_PROXY

gfsh>create region --name=region1 --type=
Complete the command with the type of region you want to create. For example, create a local region:
gfsh>create region --name=region1 --type=LOCAL
Member  | Status
------- | --------------------------------------
server1 | Region "/region1" created on "server1"
Because only one server is in the distributed system at the moment, the command creates the local region on server1.
Step 9: Start another server. This time, specify a --server-port argument with a different port because you are starting a second cache server process on the same host machine.
gfsh>start server --name=server2 --server-port=40405
Starting a Cache server in /home/stymon/server2 on pickle[40405] as server2...
......
Server in /home/stymon/server2 on pickle[40405] as server2 is currently online.
Process ID: 3967
Uptime: 3 seconds
GemFire Version: 7.0.1
Java Version: 1.6.0_38
Log File: /home/stymon/server2/server2.log
JVM Arguments: -Dgemfire.default.locators=192.168.129.144[10334] -XX:OnOutOfMemoryError="kill -9 %p" -Dgemfire.launcher.registerSignalHandlers=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /home/stymon/Pivotal_GemFire_701_b39782/lib/gemfire.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/antlr.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/gfsh-dependencies.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/pulse-dependencies.jar:.:/home/stymon/Pivotal_GemFire_701_b39782/lib/gemfire.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/antlr.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/gfSecurityImpl.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/jackson-core-asl-1.9.9.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/commons-logging.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/tomcat-embed-core.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/tomcat-embed-logging-juli.jar:/home/stymon/Pivotal_GemFire_701_b39782/lib/tomcat-embed-jasper.jar:/home/stymon/Pivotal_GemFire_701_b39782/SampleCode/tutorial/classes:/home/stymon/Pivotal_GemFire_701_b39782/SampleCode/helloworld/classes:/home/stymon/Pivotal_GemFire_701_b39782/SampleCode/quickstart/classes:/home/stymon/Pivotal_GemFire_701_b39782/SampleCode/examples/dist/classes:/usr/java/jdk1.6.0_38/jre/../lib/tools.jar

Step 10: Create a replicated region.

gfsh>create region --name=region2 --type=REPLICATE
Member  | Status
------- | --------------------------------------
server1 | Region "/region2" created on "server1"
server2 | Region "/region2" created on "server2"
Step 11: Create a partitioned region.
gfsh>create region --name=region3 --type=PARTITION
Member  | Status
------- | --------------------------------------
server1 | Region "/region3" created on "server1"
server2 | Region "/region3" created on "server2"
Step 12: Create a replicated, persistent region.
gfsh>create region --name=region4 --type=REPLICATE_PERSISTENT
Member  | Status
------- | --------------------------------------
server1 | Region "/region4" created on "server1"
server2 | Region "/region4" created on "server2"
Step 13: List regions. Use the list regions command to display all the regions you just created:
gfsh>list regions
List of regions
---------------
region1
region2
region3
region4
Step 14: View member details again by executing the describe member command.
gfsh>describe member --name=server1
Name        : server1
Id          : pickle(server1:3813)<v16>:3789
Host        : pickle
Regions     : region4
              region3
              region2
              region1
PID         : 3813
Groups      : 
Used Heap   : 12M
Max Heap    : 485M
Working Dir : /home/stymon/server1
Log file    : /home/stymon/server1/server1.log
Locators    : 192.168.129.144[10334]

Cache Server Information
Server Bind              : null
Server Port              : 40404
Running                  : true
Client Connections       : 0
Notice that all the regions that you created now appear in the "Regions" section of the member description.
gfsh>describe member --name=server2
Name        : server2
Id          : pickle(server2:3967)<v17>:19343
Host        : pickle
Regions     : region4
              region3
              region2
PID         : 3967
Groups      : 
Used Heap   : 4M
Max Heap    : 485M
Working Dir : /home/stymon/server2
Log file    : /home/stymon/server2/server2.log
Locators    : 192.168.129.144[10334]

Cache Server Information
Server Bind              : null
Server Port              : 40405
Running                  : true
Client Connections       : 0
Notice that the second server hosts only region2, region3, and region4. region1 does not appear because it is a local region that was created on server1 only.
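To look at a single region in more detail across members, you can also try the describe region command (shown here as a sketch; the exact output depends on your version):
gfsh>describe region --name=region2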
Step 15: Put data in a local region. Enter the following put command:
gfsh>put --key=('123') --value=('ABC') --region=region1
Result      : true
Key Class   : java.lang.String
Key         : ('123')
Value Class : java.lang.String
Old Value   : <NULL>
Step 16: Put data in a replicated region. Enter the following put command:
gfsh>put --key=('123abc') --value=('Hello World!!') --region=region2
Result      : true
Key Class   : java.lang.String
Key         : ('123abc')
Value Class : java.lang.String
Old Value   : <NULL>

Step 17: Retrieve data. You can use the locate entry, query, or get commands to return the data you just put into the region.

For example, using the get command:
gfsh>get --key=('123') --region=region1
Result      : true
Key Class   : java.lang.String
Key         : ('123')
Value Class : java.lang.String
Value       : ABC
For example, using the locate entry command:
gfsh>locate entry --key=('123abc') --region=region2
Result          : true
Key Class       : java.lang.String
Key             : ('123abc')
Locations Found : 2


MemberName | MemberId
---------- | -------------------------------
server2    | pickle(server2:3967)<v17>:19343
server1    | pickle(server1:3813)<v16>:3789
Notice that because the entry was put into a replicated region, the entry is located on both distributed system members.
For example, using the query command:
gfsh>query --query='SELECT * FROM /region2'

Result     : true
startCount : 0
endCount   : 20
Rows       : 1

Result
-----------------
('Hello World!!')

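If you want to delete an entry, gfsh also provides a remove command. For example (a sketch using the region and key from the steps above; skip it if you want to keep the data for the export step that follows):
gfsh>remove --region=region2 --key=('123abc')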
NEXT_STEP_NAME : END
Step 18: Export your configuration and data. When you exit the gfsh shell, you lose the locator and server configurations and any data that you have created within the shell. To save them, use the export config and export data commands to write your settings to files; you can later copy the configuration files into the working directories of other members. For example:
gfsh>export config
Downloading Cache XML file: /home/stymon/./server1-cache.xml
Downloading properties file: /home/stymon/./server1-gf.properties
Downloading Cache XML file: /home/stymon/./server2-cache.xml
Downloading properties file: /home/stymon/./server2-gf.properties
Then in a regular terminal, you can simply use your operating system's copy command to put copies of those configuration files into the working directories of the members. For example:
prompt# cp server1-gf.properties server1/gemfire.properties
prompt# cp server1-cache.xml server1/cache.xml
prompt# cp server2-gf.properties server2/gemfire.properties
prompt# cp server2-cache.xml server2/cache.xml
The next time you start the servers, you only need to specify their names; the --server-port specification and the regions that you defined in the previous steps will already be configured for you.
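For example, after copying the configuration files as shown above, restarting the second server from the same folder could look like this (a sketch; adjust the name to your environment):
gfsh>start server --name=server2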
To export server region data, use the export data command. For example:
gfsh>export data --region=region1 --file=region1.gfd --member=server1
You can later use the import data command to import that data into the same region on another member.
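For example, a sketch of importing the exported file into region1 on another member (server3 here is a hypothetical member name, not one started in this tutorial):
gfsh>import data --region=region1 --file=region1.gfd --member=server3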