Congestion
Being a collaborative authoring tool for web development, GWD could face
congestion as the organization grows and the quantity of users and traffic
increases. As an organization grows, so do the number of authors, the
number of documents, and the number of versions of each document. This
increases contention for the hosts' computational
resources and the network's communications resources. The various points
of congestion are:
1. Access Control / Permission (see feature)
As the number of authors increases, so does the volume of requests to
verify users and their access rights to various documents. Since an
author may sit idle while awaiting verification, the completion time of
this task is critical. There are two ways to address this problem:
(i) Have a single verification server and queue all authentication
requests in front of it. Though this maximizes server utilization,
at some point the authors' sojourn time grows too high and becomes a
problem of its own.
(ii) Keep documents associated with different websites or different
subsections on different servers and have these servers authenticate
the appropriate authors--thus introducing parallelism to increase the
throughput by lowering waiting times.
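Option (ii) can be sketched as a simple sharding rule: hash each site
subsection to one of several verification servers so authentication
requests are handled in parallel. This is a minimal illustration; the
server names and the `auth_server_for` helper are hypothetical, not part
of GWD.

```python
import hashlib

# Hypothetical pool of verification servers, one per document partition.
AUTH_SERVERS = ["auth1.example.org", "auth2.example.org", "auth3.example.org"]

def auth_server_for(section: str) -> str:
    """Pick the verification server responsible for a site subsection.

    Hashing keeps the mapping stable: the same subsection always goes
    to the same server, so each server authenticates a fixed subset
    of authors and the load is spread across the pool.
    """
    digest = hashlib.sha1(section.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(AUTH_SERVERS)
    return AUTH_SERVERS[index]
```

Because the mapping is deterministic, no central coordinator is needed:
any host can compute which server to contact for a given subsection.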
2. Real Time Collaboration (see feature)
This is a very resource-intensive feature, and the user experience is
highly sensitive to latency. When too many users tie up the resources
on the host(s) handling collaboration processes, the communication links
become highly congested and the servers' workload increases.
This problem can be solved by splitting this feature further into
smaller functional modules based on the needs of the particular environment.
By assigning these sub-modules to different hosts, it is possible to
distribute a heavy workload of real time collaboration across several
hosts.
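The module-splitting idea above can be sketched as a routing table that
maps each collaboration sub-module to its own host. The module names
(presence, edit broadcast, locking) and host names are illustrative
assumptions, not features defined by GWD.

```python
# Hypothetical assignment of real-time collaboration sub-modules to hosts.
MODULE_HOSTS = {
    "presence": "collab-a.example.org",        # who is editing which document
    "edit_broadcast": "collab-b.example.org",  # relaying edit deltas to peers
    "locking": "collab-c.example.org",         # section-level write locks
}

def route(module: str) -> str:
    """Return the host responsible for a collaboration sub-module."""
    try:
        return MODULE_HOSTS[module]
    except KeyError:
        raise ValueError(f"unknown collaboration module: {module}")
```

Because each sub-module runs on its own host, a surge in one kind of
traffic (say, edit broadcasts) congests only that host rather than the
whole collaboration service.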
3. Content-based Link Suggestion (see feature)
This is another feature prone to congestion as an organization's web
site grows. It becomes computationally expensive to compare the keywords
of each document against those of every other document while concurrently
suggesting links to an increasing number of authors.
One way to overcome this problem is to partition this process across
several hosts based on information content or topic. This makes each
task more manageable and reduces congestion while preserving scalability.
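A minimal sketch of the topic-based partitioning, assuming each document
carries a topic label and a keyword list (both assumptions for this
illustration): documents are grouped by topic, and link suggestion for a
document only compares keywords within its own partition.

```python
from collections import defaultdict

def partition_by_topic(docs):
    """Group documents by topic.

    docs: iterable of (doc_id, topic, keywords) triples.
    Returns {topic: {doc_id: set_of_keywords}} so each topic's
    partition can be assigned to its own host.
    """
    partitions = defaultdict(dict)
    for doc_id, topic, keywords in docs:
        partitions[topic][doc_id] = set(keywords)
    return partitions

def suggest_links(partition, doc_id, min_shared=2):
    """Suggest documents in the same partition sharing enough keywords."""
    mine = partition[doc_id]
    return [other for other, kw in partition.items()
            if other != doc_id and len(mine & kw) >= min_shared]
```

Each host compares keywords only within its partition, so the cost of a
suggestion scales with the partition size rather than the whole site.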
4. Increase in the Number of Documents and Users
As the number of documents and the number of versions increases, there
might be congestion in the storage and retrieval of archived versions.
A possible solution to overcome this problem is to move older
versions that have a very low probability of being retrieved to tertiary
storage devices. This will not hurt scalability, as these documents
will seldom be requested.
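One simple policy for deciding which versions belong on tertiary
storage is to flag versions not retrieved within some idle window. The
threshold and the version bookkeeping below are illustrative
assumptions, not part of GWD's design.

```python
import time

SECONDS_PER_DAY = 86400

def versions_to_archive(versions, now=None, idle_days=365):
    """Select document versions to migrate to tertiary storage.

    versions: iterable of (version_id, last_retrieved_epoch_seconds).
    A version qualifies when it has not been retrieved within the
    idle window, i.e. its retrieval probability is presumed very low.
    """
    now = time.time() if now is None else now
    cutoff = now - idle_days * SECONDS_PER_DAY
    return [vid for vid, last in versions if last < cutoff]
```

Running such a policy periodically keeps primary storage holding only
the versions likely to be requested, which is what preserves scalability.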
5. Reconciliation Processing (see feature)
Reconciliation processing is also an area highly prone to congestion.
Scalability is greatly dependent on the number of documents that the
logic can simultaneously reconcile.
Congestion can be greatly reduced by multiplexing and queuing. Instead
of making authors who have been working on a document concurrently wait
while their work sits in a queue for reconciliation, the application
will acknowledge the request (i.e., "the document is being
reconciled") and notify them when the task is complete.
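The acknowledge-then-notify scheme can be sketched as an asynchronous
work queue: submission returns an immediate acknowledgment, and a
background worker performs the reconciliation and invokes a notification
callback. The class and callback names are hypothetical.

```python
import queue
import threading

class ReconciliationQueue:
    """Queue reconciliation work; acknowledge now, notify on completion."""

    def __init__(self):
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, doc_id, reconcile, notify):
        """Enqueue a reconciliation request and return an immediate ack."""
        self._queue.put((doc_id, reconcile, notify))
        return f"document {doc_id} is being reconciled"

    def _run(self):
        while True:
            doc_id, reconcile, notify = self._queue.get()
            result = reconcile(doc_id)  # merge the concurrent edits
            notify(doc_id, result)      # tell the waiting authors
            self._queue.task_done()
```

Authors are never blocked on the reconciliation itself; the queue absorbs
bursts of concurrent edits, and the worker drains it at its own pace.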
Network-Level Solutions
In addition to the host-level solutions listed above, congestion can
be reduced at the network level. One way is to associate queues with communication
links and prioritize tasks based on the form of communication--immediate
vs. deferred. Also, the network should provide congestion information
to hosts that could help with flow control.
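The per-link priority queue described above can be sketched as follows,
assuming just two traffic classes, immediate (interactive) and deferred
(batch); the class names and the FIFO tie-breaking are illustrative
choices.

```python
import heapq

IMMEDIATE, DEFERRED = 0, 1  # lower value drains first

class LinkQueue:
    """Per-link send queue that drains immediate traffic before deferred."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves FIFO order within a priority

    def enqueue(self, priority, message):
        heapq.heappush(self._heap, (priority, self._seq, message))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]
```

Under congestion, deferred traffic simply waits behind interactive
traffic on each link, so latency-sensitive tasks are served first.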
last updated 12/04/98