
Implementing a Multi-Threaded Server

The capability to implement a multi-threaded server that manages multiple transactions is one of the distinguishing features of the SSM. Other persistent storage systems, such as the Exodus Storage Manager (http://www.cs.wisc.edu/exodus/), only allow clients that run one transaction at a time and are usually single-threaded. The grid example server is a multi-threaded program that manages requests from multiple clients as well as interactive commands entered through its terminal interface.

 

Error Codes

Most SSM methods return an error code object of type w_rc_t (usually typedef'ed as rc_t). It is important to always check the return values of these methods. To help find places where return codes are not checked, the w_rc_t destructor contains extra code (when compiled with DEBUG defined) to verify that the error code was checked. A w_rc_t is considered checked when any of its methods that read or examine the error code is called, including the assignment operator. Therefore, simply returning a w_rc_t (which involves an assignment) counts as checking it. Of course, the newly assigned w_rc_t is then considered unchecked. More details on error checking are available in the SSM interface document.

The macros W_DO and W_COERCE, declared in w_rc.h, help keep return-value checking concise. The W_DO macro calls the function it is given and checks the return code; if an error code is returned, the macro executes a return statement, passing the error up to the caller. W_COERCE does the same thing, except that it exits the program if the called function returns an error code.
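As a sketch, W_DO suits methods that can propagate errors to a caller, while W_COERCE suits start-up code with no caller to report to. The helper functions below are hypothetical, not part of the grid example:

    #include "sm_vas.h"  // assumed umbrella header: w_rc_t, W_DO, W_COERCE, ss_m

    // Hypothetical helper: run an empty transaction, propagating any error.
    rc_t run_empty_transaction(ss_m* ssm)
    {
        W_DO(ssm->begin_xct());   // on error: return the w_rc_t to our caller
        W_DO(ssm->commit_xct());
        return RCOK;              // the "no error" code
    }

    // During start-up there is no caller to propagate errors to, so a
    // failure ends the program instead.
    void must_succeed(ss_m* ssm)
    {
        W_COERCE(run_empty_transaction(ssm));  // on error: exit the program
    }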

Many of the grid methods return w_rc_t codes as well. However, the RPC-related methods of command_server_t return error message strings. The conversion from w_rc_t to string is done by the SSMDO macro found at the top of command_server.C.

 

Startup

 

Configuration Options

Several SSM configuration options must be set before the SSM is started with the ss_m constructor. In addition, most servers, including the grid server, will have options of their own that need to be set. The SSM provides an option facility, options(common), for this purpose. Included with the option facility are functions to find options on the program command line and in configuration files.

In server.C, main creates a 3-level option group (levels will be discussed shortly) and adds the server's options to the group with a call to ss_m::setup_options. Once the option group is complete, we call init_config_options in options.C to initialize the options' values.
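In outline, the start of main looks something like the sketch below; the option_group_t constructor argument and the init_config_options signature are assumptions for illustration (the real code is in server.C and options.C):

    #include "sm_vas.h"   // SSM interface, including the option facility

    // Assumed signature for the example's option initializer.
    extern rc_t init_config_options(option_group_t& options,
                                    const char* prog_type,
                                    int& argc, char* argv[]);

    int main(int argc, char* argv[])
    {
        option_group_t options(3);               // 3 classification levels
        W_COERCE(ss_m::setup_options(&options)); // register options with the group
        W_COERCE(init_config_options(options, "server", argc, argv));
        // ... start the SSM and the server threads ...
        return 0;
    }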

The init_config_options function is used by both the client and server programs to initialize option values. The first thing it does is add classification level names for the option group. The option group used for the example has 3 levels. The first level is the system the program belongs to, in this case grid. The second is the type of program, in this case server. The third is the filename of the program executable, which is also server. The classification levels allow options to be set for multiple programs with a single configuration file. For example, both the client and server programs have a connect_port option specifying the port clients use to connect to the server. The following line in a configuration file sets the connection port so that client and server always agree:

grid.*.connect_port: 1234

The following line would be ignored by the grid programs as it is for the Shore VAS system:

shore.*.connect_port: 1234

After setting the level names, init_config_options reads the configuration file ./exampleconfig, scanning for options. The command line is then searched for option settings, so that command-line settings override those in the configuration file. Any option settings found on the command line are removed by adjusting argc and argv.
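Once initialized, option values can be read back out of the group. The lookup and value calls below are assumptions about the option facility's interface, shown only to make the flow concrete:

    #include "sm_vas.h"

    // Hypothetical accessor: retrieve the connect_port setting.
    const char* connect_port(option_group_t& options)
    {
        option_t* opt = 0;
        W_COERCE(options.lookup("connect_port", false, opt));
        return opt->value();   // e.g. "1234" from the configuration line above
    }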

 

SSM Initialization

Once all of the configuration options have been set, the SSM can be started. The SSM is started by constructing an instance of the ss_m class (as is done in main in server.C).

One of the things the ss_m constructor does is perform recovery, if necessary. Recovery will be necessary if a previous server process crashed before successfully completing the ss_m destructor.

Once the SSM is constructed, main calls setup_device_and_volume to initialize the device and volume, as described above. With the SSM constructed, we can now start the threads that do the real work.
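Putting the pieces together, the start-up and shutdown path reduces to something like this sketch (setup_device_and_volume is the example's own function, but its parameters here are abbreviated assumptions):

    #include "sm_vas.h"

    extern rc_t setup_device_and_volume(ss_m* ssm);  // abbreviated signature

    int run_server()
    {
        // Constructing ss_m performs recovery if the previous process
        // crashed before the ss_m destructor completed.
        ss_m* ssm = new ss_m();

        W_COERCE(setup_device_and_volume(ssm));  // mount device, set up volume

        // ... fork the listener and terminal threads and wait for them ...

        // Destroying the ss_m shuts down cleanly, so the next start-up
        // will not need recovery.
        delete ssm;
        return 0;
    }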

 

Thread Management

The grid server manages multiple activities. It responds to input from the terminal, listens for new connections from clients, and processes RPCs from clients. Any one of these activities can block, while acquiring a lock or performing I/O, for example. By assigning activities to threads, the entire server process no longer blocks; only individual threads do.

The subsections below explain the types of threads used by the grid server. The thread classes are declared in rpc_thread.h and implemented in rpc_thread.C. Notice that each thread class is derived from smthread_t. All threads that use SSM facilities must be derived from smthread_t rather than from the base class, sthread_t.

The first code to be executed by any newly forked thread is its run method. The run method is virtual so that it can be specialized for each type of thread.
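A minimal thread class in this style looks roughly as follows; the smthread_t constructor arguments are simplified assumptions (see rpc_thread.h for the real classes):

    #include "sm_vas.h"

    class worker_t : public smthread_t {
    public:
        worker_t() : smthread_t(t_regular, false, false, "worker") {}

        // run is the first code executed in the new thread; each thread
        // type specializes it.
        virtual void run() {
            // ... do the thread's work, blocking as needed ...
        }
    };

A thread of this kind is started by calling its fork method and awaited with wait, which is how main waits for the terminal and listener threads during shutdown.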

 

Listener Thread

Once the RPC facility has been initialized, main creates a new thread, of type listener_t, that listens for client connections. The listener thread does two jobs: it accepts new client connections, forking a client thread for each, and it maintains the list of client threads.

The work of the listener thread is all done in its run method (as is true for almost all threads). The first thing run does is create a file handler (sfile_read_hdl_t) for reading from the connection socket. The code then loops, waiting for input on the socket. When a connection request arrives, the RPC function svc_getreqset is called, allowing the RPC package to process the connection request. Then a client_t thread (discussed in the next section) is created to handle the connection. The new client thread is added to the listener's list of clients. Notice that since the client list may be accessed by multiple threads, it is protected by a mutex.
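Schematically, the loop has the shape below; the sfile_read_hdl_t methods and the list handling are assumptions, and error handling is elided (the real code is in rpc_thread.C):

    #include "sm_vas.h"
    #include <rpc/rpc.h>    // Sun RPC: svc_getreqset, svc_fdset

    // Structural paraphrase of listener_t::run; not the real code.
    void listener_loop(int listen_socket)
    {
        sfile_read_hdl_t rd(listen_socket);  // file handler on the socket
        for (;;) {
            w_rc_t rc = rd.wait();           // assumed: block until readable
            if (rc.is_error()) break;        // shutdown() ends the wait

            svc_getreqset(&svc_fdset);       // RPC package accepts connection

            // Fork a client_t for the new connection and append it to the
            // listener's client list while holding the list mutex.
        }
        // Notify the cleaner thread so defunct client threads are reaped.
    }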

When the server is ready to shut down, main calls listener_t::shutdown, which in turn calls shutdown on the file handler for the connection socket. This causes the listener thread to wake up and break out of the while loop in the run method. The listener then notifies the cleaner thread (see below) to destroy defunct threads.

 

Client Threads

When the listener thread detects a new connection, it forks a new thread, of type client_t, to process RPC requests on the connection. The client_t constructor is given a socket on which to wait for requests and a pointer to the listener thread to notify when it is finished. Notice that the client thread has a buffer area for generating RPC replies, called reply_buf.

The client_t::run method begins by creating a file handler (sfile_read_hdl_t) for reading from the socket where requests will arrive. Next, a command_server_t object is created to process the requests.

The code then loops, waiting for input on the socket. When an RPC request arrives, the RPC function svc_getreqset is called, which in turn dispatches the RPC to the proper RPC stub function (implemented in command_server.C). When the connection is broken, the loop exits and the file handler and command_server_t object are destroyed. Then listener_t::child_is_done is called to notify the listener that the client thread is finished.

 

Cleaner Thread

The cleaner thread waits on a condition variable and when awoken, checks for defunct threads in the list of client threads. Any defunct threads found are removed and destroyed.

Normally it wakes up when a client thread finishes its client_t::run method, checks the list, and then waits again on the condition variable cleanup. When the listener thread ends, it causes the cleaner thread to destroy itself.
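The rendezvous looks roughly like this; smutex_t and scond_t are the sthread library's mutex and condition variable, and the signatures shown are assumptions:

    #include "sm_vas.h"

    smutex_t list_mutex;            // guards the client-thread list
    scond_t  cleanup("cleanup");    // the condition variable named above

    // Cleaner side: sleep until signaled, then reap defunct threads.
    void reap_once()
    {
        W_COERCE(list_mutex.acquire());
        W_COERCE(cleanup.wait(list_mutex));   // assumed signature
        // ... remove and delete defunct client_t objects from the list ...
        list_mutex.release();
    }

    // Client-thread side, at the end of client_t::run: wake the cleaner.
    void signal_cleaner()
    {
        cleanup.signal();
    }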

 

Terminal Input Thread

The main program simply starts a main thread after processing options. The main thread then takes over the work of the server. After starting the listener thread, main creates another thread, of type stdin_thread_t, which processes commands from standard input.

The work of the standard input thread is all done in its run method (as is true for almost all threads). The first thing run does is create a file handler (sfile_read_hdl_t) for reading from the file descriptor for standard input. Next, a command_server_t object is created to process the commands. The code then loops, waiting for input. When input is ready, a line is read and fed to command_server_t::parse_command for processing. If parse_command indicates that the quit command has been entered, or if EOF is reached on standard input, the input loop is exited and the thread ends.
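The input loop behaves like the sketch below; the real code waits on an sfile_read_hdl_t rather than calling fgets directly, and parse_command's parameters here are an assumption:

    #include <stdio.h>
    #include "command_server.h"   // the example's command_server_t

    // Simplified terminal loop in the style of stdin_thread_t::run.
    void terminal_loop(command_server_t& commands)
    {
        char line[256];
        bool quit = false;
        while (!quit && fgets(line, sizeof line, stdin)) {  // EOF ends the loop
            commands.parse_command(line, quit);  // assumed: sets quit on "quit"
        }
    }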

 

Transaction and Lock Management

The grid server uses a simple transaction management scheme. All operations on data managed by the SSM must be done within the scope of a transaction. Each client thread starts a transaction for the client it manages. Clients decide when to end the transaction (either committing or aborting it). When this occurs, a new one is automatically started by the grid server. If a client disconnects from the server, its current transaction is automatically aborted.
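In outline, the scheme looks like this (control flow simplified; begin_xct, commit_xct, and abort_xct are the SSM's transaction calls):

    #include "sm_vas.h"

    // Sketch of the per-client transaction scheme.
    void client_session(ss_m* ssm, bool client_commits, bool disconnects)
    {
        W_COERCE(ssm->begin_xct());        // a transaction is always open
        if (client_commits) {
            W_COERCE(ssm->commit_xct());   // client asked to commit ...
            W_COERCE(ssm->begin_xct());    // ... so start the next one
        }
        if (disconnects) {
            W_COERCE(ssm->abort_xct());    // disconnect aborts the current xct
        }
    }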

The SSM automatically acquires locks when data is accessed, providing serializable transactions. The grid server relies mostly on this automatic locking. One place where the server explicitly acquires locks is the grid_t::clear method, which removes every item from the database; there we acquire an EX lock on the item file and indices to avoid the overhead of acquiring finer-granularity locks.
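Schematically (EX is the SSM's exclusive lock mode; the lvid/serial form of ss_m::lock used here is an assumption, as is the helper function itself):

    #include "sm_vas.h"

    // Hypothetical sketch of coarse-granularity locking as in grid_t::clear.
    rc_t clear_all(ss_m* ssm, const lvid_t& lvid, const serial_t& item_file)
    {
        // One EX lock on the whole file instead of a lock per item.
        W_DO(ssm->lock(lvid, item_file, EX));
        // ... destroy every item record and the index entries ...
        return RCOK;
    }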

More sophisticated transaction and locking schemes are possible. For example, the grid_t::generate_display method (used by the print command) locks the entire file containing items, thus preventing changes to the grid. For greater concurrency, generate_display could start a separate transaction before scanning the item file and commit it afterward, releasing the locks on the file. To do this, the thread would use the smthread_t::attach method to re-attach to the original client transaction.

Another way to get a similar effect is to pass the t_cc_none flag as the concurrency control (cc) parameter of the scan_file_i constructor.
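A scan at that isolation level looks roughly like this; apart from t_cc_none itself, the constructor and next parameters are assumptions:

    #include "sm_vas.h"

    // Sketch: scan the item file without acquiring record locks.
    rc_t scan_items(const lvid_t& lvid, const serial_t& item_file)
    {
        scan_file_i scan(lvid, item_file, ss_m::t_cc_none);
        pin_i* handle = 0;
        bool   eof = false;
        W_DO(scan.next(handle, 0, eof));   // pin the first record
        while (!eof) {
            // ... handle points at the pinned record; format it ...
            W_DO(scan.next(handle, 0, eof));
        }
        return RCOK;
    }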

 

RPC Implementation

At the heart of the grid system are the RPCs called by the client and serviced at the server. We use the publicly available Sun RPC package to implement the RPCs.

 

Declarations

The RPCs are declared in msg.x. This file includes grid_basics.h, which contains some additional declarations used throughout the grid code. The first part of msg.x contains declarations for the structures used to hold RPC arguments and return values, followed by a listing of the RPCs. The final part of the file contains ANSI-C style function prototypes for the server and client side RPC stubs, since the RPC package does not generate them.

The msg.x file is processed by the rpcgen utility (see the rpcgen(1) manual page) to create the client-side stubs (msg_clnt.c), the server-side dispatch code (msg_svc.c), XDR routines for the argument and reply structures, and a header file declaring them.

 

C++ Wrappers

The output of rpcgen is inconvenient for two reasons: it is C, not C++, and the client stubs take different parameters than the server's. Therefore, we encapsulate the RPCs in the abstract base class command_base_t, declared in command.h. The pure virtual functions in this class represent RPCs. Class command_client_t (in command_client.h) is derived from command_base_t and implements the client side of the RPCs by calling the C routines in msg_clnt.c. Also derived from command_base_t is command_server_t (in command_server.h), which implements the server side of the RPCs.
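Its shape is roughly the following; the particular commands shown are placeholders, and the string return type reflects the error-message convention described earlier:

    // Abridged sketch of command.h's abstract base class.
    class command_base_t {
    public:
        virtual ~command_base_t() {}

        // One pure virtual function per RPC (names here are placeholders);
        // command_client_t sends a request, command_server_t does the work.
        virtual const char* locate_item(int x, int y) = 0;
        virtual const char* print_grid()              = 0;

        // Shared by both sides: parse one command line and call the
        // matching method above.
        const char* parse_command(const char* line);
    };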

The server-side C stubs for the RPCs, implemented in server_stubs.C, call corresponding command_server_t methods.

The only function that makes RPC requests is command_base_t::parse_command. It parses a command line and calls the appropriate command_base_t method implementing the RPC.

To process RPC requests on the server, an instance of command_server_t is created for each client thread. When an RPC arrives, the thread managing the client is awakened and the RPC dispatch function in msg_svc.c is called. This calls the server-side C stub which in turn calls the corresponding command_server_t method. The methods in command_server_t call grid_t methods (in grid.C) to access and update the grid database.

To execute commands on the server, an instance of command_server_t is created for the thread managing standard input. This thread calls command_base_t::parse_command for each line of input. The parse_command method calls the command_server_t methods directly, short-circuiting the RPC facility.

 

RPC Startup

Once the SSM and volumes are initialized, the grid server is ready to start the RPC service and begin listening for connections from clients. RPC start-up is done by the function start_tcp_rpc in server.C. This function creates the socket used to listen for connection requests, binds a port to the socket, and then calls the RPC facility's initialization functions.
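A sketch of that job with the standard Sun RPC calls follows; GRID_PROG, GRID_VERS, and the grid_program_1 dispatch routine stand in for the names rpcgen derives from msg.x (they would come from the generated header), and error handling is elided:

    #include <rpc/rpc.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    extern "C" void grid_program_1(struct svc_req*, SVCXPRT*);  // placeholder

    int start_rpc(unsigned short port)
    {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(port);        // the connect_port option
        bind(sock, (sockaddr*) &addr, sizeof addr);

        SVCXPRT* xprt = svctcp_create(sock, 0, 0); // TCP transport on the socket
        svc_register(xprt, GRID_PROG, GRID_VERS,
                     grid_program_1, 0);           // 0: skip the portmapper
        return sock;
    }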

 

Multi-threading Issues

The multi-threaded environment of the server requires changes to a couple of common Sun RPC practices.

Replies are usually placed in a statically allocated structure. With multiple threads, each thread needs its own space for replies, so a reply area is created for each thread, as described above.
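A server-side stub can then find its thread's reply area through the current-thread pointer; smthread_t::me and the cast below are illustrative assumptions, as are the argument and reply types (constructors omitted):

    #include "sm_vas.h"
    #include <rpc/rpc.h>

    // Hypothetical reply type and client thread with a per-thread buffer.
    struct locate_reply { char message[128]; };

    class client_t : public smthread_t {
    public:
        locate_reply reply_buf;   // per-thread space for RPC replies
        // ... socket, run(), etc. ...
    };

    // Sketch of a server-side stub: use the calling thread's buffer
    // instead of the static storage a single-threaded server would use.
    locate_reply* locate_1(void* args, struct svc_req*)
    {
        client_t* self = (client_t*) smthread_t::me();
        // ... fill self->reply_buf via command_server_t ...
        return &self->reply_buf;
    }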

The RPC package allocates (with malloc) space for pointer arguments in RPCs. The convention is that the function processing a request frees the space from the previous request of the same type. Because this convention requires that the reply be saved in static storage, it does not work in a multi-threaded environment. The Sun RPC package shipped with the Shore release includes a modified rpcgen that generates a dispatch routine which automatically frees the space after the reply is sent, relieving the function of the burden of freeing it. Because of this change, the library does not lend itself to saving replies for retransmission in response to duplicate requests (for the UDP service).

 

Steps to add a New RPC

As an example of how to add an RPC, we explain how the locate command was added to the grid example.

 

Shutdown

Shutting down the SSM involves ending all threads except the one running main, and then destroying the ss_m object. After main starts the thread that processes terminal commands, it calls wait on that thread. When the quit command is entered, the terminal thread ends, causing wait to return and waking up the main thread. Main then tells the listener thread to shut down and waits for it. The shutdown process for the listener thread is described above. The main thread wakes up when the listener thread is done. The final shutdown step is to delete the instance of class ss_m created at the beginning of main.

