Shore Project Plans
The Shore Project Group
Computer Sciences Department
Thu Nov 3 14:13:18 CST 1994
This document describes what,
in the overall plan for Shore,
is yet to be done,
and it describes our plans for finishing the system.
The Shore Value-Added Server (SVAS) RPC does not handle
shipping objects that are larger than
the amount of shared memory allocated for the client-server connection.
(Presently it prints a message; this restriction needs to be removed.)
du/df information for indexes needs to be checked, corrected
if necessary, and printed by the du/df commands.
The figure for alignment in non-index pages is wrong (it doesn't
take into account alignment bytes within a record). This
needs to be corrected.
For the alpha release, some of these functions result in an
error message or a warning message.
Persistent mounts are not entirely debugged.
There might still be assertions and assumptions in the
SVAS about the targets of links being on the same volume
as the directory containing the link.
Persistent mounts are not supported on the client side either.
For the Alpha release, the persistent mount operation
returns an error message (not implemented).
One cannot destroy a file system (volume). That will be fixed.
"Error not checked" still appears in places.
The SVAS lookup function doesn't behave the same
on the client and server sides.
SVAS getDirEntries returns BadCookie (-2) when it reaches
the end of the directory.
It should return NoSuchCookie.
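The distinction matters to callers: end-of-directory is the normal termination of a scan, while a bad cookie is a caller error. The following sketch shows the proposed semantics; the status codes, `Dir`, and `getDirEntries` signature here are illustrative stand-ins, not the real SVAS interface.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical status codes; the real SVAS constants differ.
enum ScanStatus { SCAN_OK = 0, NoSuchCookie = -1, BadCookie = -2 };

struct Dir {
    std::vector<std::string> entries;
};

// Proposed semantics: a cookie past the last entry is the normal
// end-of-directory case (NoSuchCookie); a malformed cookie is a
// genuine caller error (BadCookie).
int getDirEntries(const Dir& d, int cookie, std::string& out, int& next) {
    if (cookie < 0) return BadCookie;                         // caller error
    if (cookie >= (int)d.entries.size()) return NoSuchCookie; // end of scan
    out = d.entries[cookie];
    next = cookie + 1;
    return SCAN_OK;
}
```

A scan loop would then iterate until `NoSuchCookie` and treat `BadCookie` as a bug.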
Errors and logging:
The Shore Value-Added Server needs options to initialize
the logging levels for the various services.
Error logging for all services needs to be
sent to the services' logs, and nothing to the screen.
The NFS server needs to pass back small integers
for the fsid and fileid,
and its transient map from serial numbers to
these small integers needs to be revamped.
We need to reduce the number of error codes that the application
needs to check, and make sure the functions are consistent
about returning similar errors under similar conditions.
We also need to ensure that the error codes get converted to the
proper Unix errors by the NFS server.
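The conversion amounts to a table from internal codes to Unix errno values. A minimal sketch, with invented SVAS error names (the real w_rc_t codes differ):

```cpp
#include <cassert>
#include <cerrno>

// Hypothetical SVAS error codes for illustration only.
enum SvasErr { SVAS_NotFound, SVAS_NoSpace, SVAS_Perm, SVAS_Exists };

// The NFS server would apply a conversion like this before replying,
// so Unix clients see the familiar errno values.
int svas_to_errno(SvasErr e) {
    switch (e) {
        case SVAS_NotFound: return ENOENT;
        case SVAS_NoSpace:  return ENOSPC;
        case SVAS_Perm:     return EACCES;
        case SVAS_Exists:   return EEXIST;
    }
    return EIO;  // unmapped codes collapse to a generic I/O error
}
```

Keeping the mapping in one function also helps enforce the consistency goal above: similar conditions map through the same table.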
The entire error-return (w_rc_t) list needs to be sent back
in the SVAS RPC reply (at least in the debugging case)
and reconstructed on the client side.
SVAS internal functions will return a w_rc_t.
Simple functions for processing options (for the application)
need to process the command line.
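One plausible shape for such option processing is a thin wrapper over POSIX getopt(); the option letters and the `Options` structure below are invented for illustration, not the planned interface.

```cpp
#include <cassert>
#include <string>
#include <unistd.h>

// Illustrative option set for an application.
struct Options {
    std::string logdir = "/var/tmp";  // invented default
    int verbosity = 0;
};

// Minimal sketch: parse -l <dir> and -v flags with POSIX getopt().
Options parse_options(int argc, char* const argv[]) {
    Options opt;
    optind = 1;  // reset getopt state so the parser can be re-invoked
    int c;
    while ((c = getopt(argc, argv, "l:v")) != -1) {
        switch (c) {
            case 'l': opt.logdir = optarg; break;
            case 'v': opt.verbosity++;     break;
            default:  break;  // unknown options are ignored in this sketch
        }
    }
    return opt;
}
```

A real version would also read an option file and environment variables, falling back in a documented order.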
The SVAS, Storage Manager (SM), and the SVAS shell need functions to
destroy off-volume references.
The object-destroy function needs to clean up references.
We need some fsck()-like utilities to check the
integrity of the file system.
We also need programs to
remove off-volume references that are no longer needed.
A list of persistent mounts onto a volume will be stashed in the
volume's root index so that
it is possible to identify all mounts (for fsck() purposes)
without scanning the entire file system.
The SVAS shell needs administrative commands
for patching structures found to be erroneous
by fsck(). (This is necessary because someone could
reformat a volume that's linked into the name space,
without first unlinking (dismounting) it.)
Flatten the du_info structure and clean up all the code
that uses it.
The SVAS, SM, and SVAS shell need functions to
list the mounted (served) devices.
SVAS shell needs a command to tell if suppression of
error messages is on or off.
SVAS will use savepoints rather than transaction-abort
whenever it can (when errors are encountered).
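The benefit is that an error in one operation undoes only that operation, not the whole transaction. A toy sketch of the idea, with an invented `Xct` class rather than the real SM interface:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy transaction: a log of applied operations plus a stack of
// savepoint positions. Names are illustrative, not the SM's API.
class Xct {
    std::vector<std::string> ops;
    std::vector<size_t> savepoints;
public:
    void apply(const std::string& op) { ops.push_back(op); }
    void save() { savepoints.push_back(ops.size()); }
    // On error, roll back only to the last savepoint instead of
    // aborting the entire transaction.
    void rollback_to_save() {
        size_t mark = savepoints.empty() ? 0 : savepoints.back();
        ops.resize(mark);
        if (!savepoints.empty()) savepoints.pop_back();
    }
    size_t size() const { return ops.size(); }
};
```

The SVAS would take a savepoint before each operation it might need to undo, rolling back to it on error and leaving earlier work intact.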
We plan to offer documentation for all layers of the Shore software.
The SSM log will work on a raw device.
Release on HP-UX with HP CC.
Fix libraries to include ptrepository.
Finish configuration variations for the SVAS: make the SVAS
a library, and put main.C in a different directory to make
the server. Make it easier to configure different
combinations of services (for someone writing another
value-added server).
Shutting down and dismounting volumes:
The SM and SVAS will offer two forms of shutdown:
waiting for transactions to end, and aborting all transactions.
In either case,
it will be possible to shut down the server and bring it back up.
Dismounts will be disallowed, except in shut-down state.
Formatting will not be allowed on devices while they are being served.
This will require the SM to "fstat" the device so that mounted devices
can be identified by inode or device number.
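The identity check itself is standard Unix: two paths name the same underlying object exactly when their (st_dev, st_ino) pairs match. A minimal sketch using stat(2):

```cpp
#include <cassert>
#include <sys/stat.h>

// Two paths refer to the same underlying file-system object iff their
// (st_dev, st_ino) pairs match. The SM can record these at mount time
// and refuse to format a device whose identity matches a mounted one.
bool same_device_object(const char* a, const char* b) {
    struct stat sa, sb;
    if (stat(a, &sa) != 0 || stat(b, &sb) != 0) return false;
    return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
}
```

Comparing device paths as strings would miss symlinks and multiple links to the same device node; the inode comparison does not.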
SVAS: object cache needs a version of mkAnonymous with uninitialized
data (minor performance hack).
File pages are locked when records are created; this needs to be changed.
Statistics exist at all layers; they need to be coordinated, and we
need a clean way to gather them for an application.
SSM will allow files to be opened for scan-with-destroy
so that a file's objects can be destroyed during a scan.
We might also allow scan-with-updates but prohibit any
updates that would result in an object being moved.
SSM needs a next-page interface so that scanning files
can be done in the client.
(Now it's done entirely on the server, despite the fact
that some pages are cached in the client.)
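With a next-page interface, the scan loop moves to the client, which can then consult its own page cache before going to the server. A sketch under invented names (`Page`, `Server::next_page`, integer record payloads are all stand-ins):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in page: some records plus the id of the next page (-1 = end).
struct Page {
    std::vector<int> records;
    int next_page;
};

// Stand-in for the server side of a hypothetical next-page interface.
struct Server {
    std::vector<Page> pages;
    const Page* next_page(int id) const {
        return (id >= 0 && id < (int)pages.size()) ? &pages[id] : nullptr;
    }
};

// Client-side scan: walk the page chain, visiting every record locally.
// A real client would check its page cache before each next_page call.
std::vector<int> scan_file(const Server& s, int first) {
    std::vector<int> out;
    for (const Page* p = s.next_page(first); p; p = s.next_page(p->next_page))
        out.insert(out.end(), p->records.begin(), p->records.end());
    return out;
}
```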
The SVAS and object cache need a protocol to batch requests.
The queued shared-memory buffers used in EXODUS will
be brought into Shore; this should speed up
client-server communication.
SSM will make files and indexes look like records inasmuch
as they will have a "header" for meta-data.
Object cache needs a stat() method for the application to call
on registered objects.
We need to write a generic mount (for platforms other than Sparcs.)
The size of system properties stored by the SVAS will be reduced.
SSM will offer a new logging variant for files: log only the meta-data.
SSM and SVAS will support more than one volume on a device.
If an application reads any portion of a large object, the entire object
is read from the server. If the application writes any portion of the
object, then the entire object is written back to the server. Fixing this,
in conjunction with incremental removal of objects from the object cache,
should improve the performance of large objects significantly.
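One way to avoid writing the whole object back is to track which byte ranges the application actually dirtied and ship only those. The class below is an illustrative sketch of that bookkeeping, not the planned object-cache design:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Track dirtied byte ranges of a large object so that only those
// ranges need shipping to the server at flush time. Illustrative only.
class DirtyRanges {
    std::vector<std::pair<size_t, size_t>> ranges;  // half-open [start, end)
public:
    void mark(size_t start, size_t end) { ranges.emplace_back(start, end); }

    // Coalesce overlapping or adjacent ranges into the minimal set.
    std::vector<std::pair<size_t, size_t>> merged() {
        std::sort(ranges.begin(), ranges.end());
        std::vector<std::pair<size_t, size_t>> out;
        for (auto& r : ranges) {
            if (!out.empty() && r.first <= out.back().second)
                out.back().second = std::max(out.back().second, r.second);
            else
                out.push_back(r);
        }
        return out;
    }
};
```

The same structure works in the read direction: fetch only the pages covering the ranges the application touches.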
Performance and flexibility:
The object cache will be able to replace objects.
File system implementation:
We plan to reconsider the file system implementation.
We are considering modifying the concept of
a hard link somewhat, in order
to improve Unix compatibility (with respect to cross-references
and NFS), and to make it possible to generate a path to an
object, given its OID.
Transparent data distribution:
We plan to support the
distribution of data
among cooperating peer servers.
We will use a two-phase commit protocol to resolve
transactions that use distributed data.
The servers need not be on the same architectures.
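The protocol's shape is standard: in phase one every participant votes on whether it can commit; only if all vote yes does phase two commit everywhere, otherwise all abort. A minimal in-process sketch (real peers would exchange these votes as RPC messages):

```cpp
#include <cassert>
#include <vector>

// Stand-in participant: its prepare-phase vote and final state.
struct Participant {
    bool can_commit;          // vote returned in the prepare phase
    bool committed = false;
    bool prepare() { return can_commit; }
    void commit() { committed = true; }
    void abort()  { committed = false; }
};

// Phase 1: collect votes; any "no" aborts everyone.
// Phase 2: unanimous "yes" commits everyone.
bool two_phase_commit(std::vector<Participant>& parts) {
    for (auto& p : parts) {
        if (!p.prepare()) {
            for (auto& q : parts) q.abort();
            return false;
        }
    }
    for (auto& p : parts) p.commit();
    return true;
}
```

The hard parts omitted here, and the reason the real protocol needs logging, are coordinator failure between the phases and participant recovery.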
We plan to build authentication
and some security
into the system.
Cache index data in the application process.
Method dispatch will be addressed.
Indexable attributes and automatic indexes will be supported.
Performance and administrative support:
Disk management and page allocation will be improved.
Support for transient objects.
We plan to implement another language binding,
possibly for Smalltalk.
Objects that are destroyed leave
dangling pointers if the destroyed object
participates in bidirectional relationships.
An object cache and part of the type system will
be added to the server to support type integrity in the
event that objects are destroyed through NFS,
and the C++ language binding will also maintain the
integrity of bidirectional relationships.
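The invariant the binding must maintain is simple: destroying one side of a relationship removes it from every partner's inverse set, so no dangling reference survives. A toy sketch with invented names:

```cpp
#include <cassert>
#include <set>

// Toy object participating in a bidirectional relationship.
// "partners" is the inverse side; names are illustrative only.
struct Obj {
    std::set<Obj*> partners;

    // Establish the relationship in both directions at once.
    void link(Obj& other) {
        partners.insert(&other);
        other.partners.insert(this);
    }

    // Destroying an object unlinks it from every partner, so the
    // partners are left with no dangling pointer to it.
    void destroy() {
        for (Obj* p : partners) p->partners.erase(this);
        partners.clear();
    }
};
```

Doing the same cleanup in the server-side object cache is what covers destruction through NFS, where the C++ binding is not involved.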
With extensions to the SVAS to support Unix open-file semantics,
we will build a Unix link-level compatibility library for Shore.
Make Shore POSIX-compatible (the service it provides
will be POSIX-compatible,
and it will build and run on a POSIX machine).
Inter-transaction caching (client-side):
We plan to explore ways to avoid flushing a client process' object
cache when a transaction commits.
Exceptions would be a useful and natural way of reporting errors to
application programs. Currently, C++ exceptions are not viable, but as
soon as they are, we would like to explore their use in simplifying error
handling.
C++ Standard Templates:
We will take a close look at the C++ standard templates library
to see if it would be useful to adopt the interfaces described there for SDL.