MCA Distributed System Question Papers of Fifth Semester-RGPV
Hi guys, these are the solved papers of MCA fifth semester, RGPV Bhopal
If A –> B is true, then it must also be true that LC(A) < LC(B). However, the converse does not hold: LC(A) < LC(B) alone does not mean that A –> B. Therefore, we can say that we cannot infer a causal ordering of events just by looking at their timestamps.
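To make this concrete, here is a small illustrative sketch (not from the paper; class and event names are invented for the example) of Lamport logical clocks, showing two concurrent events where the timestamps happen to be ordered even though neither event causally precedes the other:

```python
# Toy Lamport clocks (illustrative sketch): LC(A) < LC(B) does not imply A -> B.
class Process:
    def __init__(self, name):
        self.name, self.clock = name, 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def receive(self, msg_ts):
        # on receipt, a Lamport clock jumps past the sender's timestamp
        self.clock = max(self.clock, msg_ts) + 1
        return self.clock

p, q = Process("P"), Process("Q")
a = p.local_event()        # event A on P, LC(A) = 1
q.local_event()
b = q.local_event()        # event B on Q, LC(B) = 2
# A and B are concurrent: no message connects them, yet LC(A) < LC(B).
assert a < b               # timestamps ordered, but A -> B is NOT implied
```

The assertion holds, but since no message ever flowed between P and Q, the events are concurrent: the timestamp ordering carries no causal information.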
Ans {i} Strict consistency - Strict consistency in computer science is the most stringent consistency model.
It says that a read operation has to return the result of the latest write operation which occurred on that data item. This is only possible when a global clock exists. Since it's impossible to implement a global clock across nodes of a distributed system, this model has traditionally only been possible on a uniprocessor.
{ii} Causal consistency - Causal consistency is one of the consistency models used in the domain of concurrent programming (e.g. in distributed shared memory, distributed transactions, etc.).
A system provides causal consistency if memory operations that potentially are causally related are seen by every node of the system in the same order. Concurrent writes (i.e. ones that are not causally related) may be seen in different order by different nodes. This is weaker than sequential consistency, which requires that all nodes see all writes in the same order, but is stronger than PRAM consistency, which requires only writes done by a single node to be seen in the same order from every other node.
When a node performs a read followed later by a write, even on a different variable, the first operation is said to be causally ordered before the second, because the value stored by the write may have been dependent upon the result of the read. Similarly, a read operation is causally ordered after the earlier write on the same variable that stored the data retrieved by the read. Also, even two write operations performed by the same node are defined to be causally ordered, in the order they were performed. Intuitively, after writing value v into variable x, a node knows that a read of x would give v, so a later write could be said to be (potentially) causally related to the earlier one. Finally, we force this causal order to be transitive: that is, we say that if operation A is (causally) ordered before B, and B is ordered before C, A is ordered before C.
Operations that are not causally related, even through other operations, are said to be concurrent.
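The rule above can be sketched as a small check (illustrative only; the write labels and the derived happens-before pair are invented for the example). Two causally related writes must be seen in the same order by every node, while a concurrent write may appear anywhere:

```python
# Sketch: P1 does W(x)1; P2 reads x=1 and then does W(x)2, so
# W(x)1 causally precedes W(x)2. W(y)3 is concurrent with both.
happens_before = {("W(x)1", "W(x)2")}   # derived from P2's read of x=1

def causally_ok(view):
    """A view (the list of writes as seen by one node) is acceptable
    iff it never reverses a happens-before pair."""
    pos = {w: i for i, w in enumerate(view)}
    return all(pos[a] < pos[b] for a, b in happens_before)

assert causally_ok(["W(x)1", "W(y)3", "W(x)2"])      # respects the pair
assert causally_ok(["W(y)3", "W(x)1", "W(x)2"])      # concurrent write moved: fine
assert not causally_ok(["W(x)2", "W(x)1", "W(y)3"])  # reverses the causal pair
```

Note that the two acceptable views differ in where the concurrent write appears, which is exactly what causal consistency permits and sequential consistency would forbid.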
{iii} Weak consistency - The name weak consistency may be used in two senses. In the first, stricter and more popular sense, weak consistency is one of the consistency models used in the domain of concurrent programming (e.g. in distributed shared memory, distributed transactions, etc.).
The protocol is said to support weak consistency if:
- All accesses to synchronization variables are seen by all processes in the same order (sequentially consistent).
- No access to a synchronization variable is allowed until all previous writes have completed everywhere.
- No read or write of ordinary data is allowed until all previous accesses to synchronization variables have been performed.
symmetric key - Symmetric-key algorithms are a class of algorithms for cryptography that use trivially related, often identical, cryptographic keys for both decryption and encryption.
The encryption key is trivially related to the decryption key, in that they may be identical or there is a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link.
Other terms for symmetric-key encryption are secret-key, single-key, shared-key, one-key, and private-key encryption. Use of the last and first terms can create ambiguity with similar terminology used in public-key cryptography.
Types of symmetric-key algorithms
Symmetric-key algorithms can be divided into stream ciphers and block ciphers. Stream ciphers encrypt the bytes of the message one at a time, and block ciphers take a number of bytes and encrypt them as a single unit. Blocks of 64 bits have been commonly used; the Advanced Encryption Standard algorithm approved by NIST in December 2001 uses 128-bit blocks.
Some examples of popular and well-respected symmetric algorithms include Twofish, Serpent, AES (Rijndael), Blowfish, CAST5, RC4, TDES, and IDEA.
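As a minimal sketch of the symmetric-key idea, the toy XOR "stream cipher" below uses the same key for both encryption and decryption. This is purely illustrative and NOT secure; real systems use a vetted algorithm such as AES from the list above:

```python
from itertools import cycle

# Toy XOR stream cipher: encryption and decryption are the SAME operation
# with the SAME shared key -- the defining property of symmetric-key schemes.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"                      # the shared secret between parties
ciphertext = xor_cipher(b"attack at dawn", key)
assert ciphertext != b"attack at dawn"      # message is scrambled
assert xor_cipher(ciphertext, key) == b"attack at dawn"  # same key reverses it
```

Applying the function twice with the same key recovers the plaintext, which is why such keys must remain secret between the communicating parties.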
Authentication in KERBEROS - You may not know it, but your network is probably unsecured right now. Anyone with the right tools could capture, manipulate, and inject data on the connections you maintain with the internet. The security cat-and-mouse game isn't one-sided, however. Network administrators are currently taking advantage of Kerberos to help combat security concerns.
Project Athena
Project Athena was initiated in 1983, when it was decided by the Massachusetts Institute of Technology that security in the TCP/IP model just wasn’t good enough. A total of 8 long years of research passed before Kerberos, named after the three-headed Greek mythological dog known as Cerberus, was officially complete.
The result of MIT’s famous research became widely used as default authentication methods in popular operating systems. If you are running Windows 2000 or later, you are indeed running Kerberos by default. Other operating systems such as the Mac OS X also carry the Kerberos protocol. Kerberos isn’t just limited to operating systems, however, since it is employed by many of Cisco’s routers and switches.
What Does It Protect Against, Anyways?
If you have ever used an FTP program over a network, you are at risk. If you have ever used a Telnet program over a network, you are again at risk. These are just two examples of how little security some applications allow. FTP and Telnet use what are called plaintext passwords, or otherwise known as cleartext passwords. These passwords are ridiculously easy to intercept with the right tools.
Anyone with a simple packet sniffer and packet analyzer can obtain an FTP or telnet logon with ease. With that kind of sensitive information being transmitted, the need for Kerberos is obvious. This need doesn’t stop there, however. Sure FTP and Telnet related logons are easy to intercept, but then again so is every other connection any of your applications has to the internet.
Through a process of man in the middle attacks, any hacker can get most logon information for just about anything. From online bank passwords to private passwords on your computer, they are all generally vulnerable to this attack. A man in the middle attack generally occurs when the hacker acts as the “man in the middle” between two computers. The hacker attempts to pretend to each computer that it is in fact, the computer they have connected to. In reality, all the data is being routed to the hacker, who can then modify or add instructions to the data.
Okay, This Sounds Useful…But How Does It Work?
Kerberos operates by encrypting data with a symmetric key. A symmetric key is a type of authentication where both the client and server agree to use a single encryption/decryption key for sending or receiving data. When working with the encryption key, the details are actually sent to a key distribution center, or KDC, instead of being sent directly between the two computers. The whole process takes the steps shown below.
1. – The authentication service, or AS, receives the request by the client and verifies that the client is indeed the computer it claims to be. This is usually just a simple database lookup of the user’s ID.
2. – Upon verification, a timestamp is created. This puts the current time in a user session, along with an expiration date. The default expiration of a timestamp is 8 hours. The encryption key is then created. The timestamp ensures that when the 8 hours are up, the encryption key is useless. (This guards against a hacker intercepting the data and trying to crack the key. Almost all keys can eventually be cracked, but doing so takes far longer than 8 hours.)
3. – The key is sent back to the client in the form of a ticket-granting ticket, or TGT. This is a simple ticket that is issued by the authentication service. It is used for authenticating the client for future reference.
4. – The client submits the ticket-granting ticket to the ticket-granting server, or TGS, to get authenticated.
5. – The TGS creates an encrypted key with a timestamp, and grants the client a service ticket.
6. – The client decrypts the ticket, tells the TGS it has done so, and then sends its own encrypted key to the service.
7. – The service decrypts the key, and makes sure the timestamp is still valid. If it is, the service contacts the key distribution center to receive a session that is returned to the client.
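The ticket-with-timestamp idea from steps 2-5 can be sketched as follows. This is a highly simplified simulation (function names, the HMAC tag, and the key material are all invented for illustration; real Kerberos encrypts tickets rather than tagging them):

```python
import time, hmac, hashlib

KDC_KEY = b"kdc-master-key"          # known only to the KDC (AS + TGS)
TICKET_LIFETIME = 8 * 3600           # the default 8-hour expiry from step 2

def issue_ticket(principal: str, now: float) -> dict:
    """Steps 1-3: after verifying the client, stamp and tag a ticket."""
    expiry = now + TICKET_LIFETIME
    tag = hmac.new(KDC_KEY, f"{principal}:{expiry}".encode(),
                   hashlib.sha256).hexdigest()
    return {"principal": principal, "expiry": expiry, "tag": tag}

def ticket_valid(ticket: dict, now: float) -> bool:
    """Steps 4-5 and 7: check the tag is genuine and the timestamp unexpired."""
    expected = hmac.new(KDC_KEY,
                        f"{ticket['principal']}:{ticket['expiry']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ticket["tag"]) and now < ticket["expiry"]

now = time.time()
tgt = issue_ticket("alice", now)
assert ticket_valid(tgt, now)                  # fresh ticket accepted
assert not ticket_valid(tgt, now + 9 * 3600)   # useless once the 8 hours are up
assert not ticket_valid({**tgt, "principal": "mallory"}, now)  # tampering detected
```

The expiry check is why an intercepted ticket is of little use to an attacker: by the time the key could be cracked, the ticket has long since expired.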
It depends on what the server does. For example, a database server that has been handed a complete transaction will maintain a log to be able to redo its operations when recovering. However, there is no need to take checkpoints for the sake of the state of the distributed system.
Checkpointing is done only for local recovery.
File system model - In computing, a file system (often also written as filesystem) is a method for storing and organizing computer files and the data they contain to make it easy to find and access them. File systems may use a data storage device such as a hard disk or CD-ROM and involve maintaining the physical location of the files; they might provide access to data on a file server by acting as clients for a network protocol (e.g., NFS, SMB, or 9P clients); or they may be virtual and exist only as an access method for virtual data (e.g., procfs). A file system is distinguished from a directory service and registry.
More formally, a file system is a special-purpose database for the storage, organization, manipulation, and retrieval of data.
Aspects of file systems - Most file systems make use of an underlying data storage device that offers access to an array of fixed-size physical sectors, generally a power of 2 in size (512 bytes or 1, 2, or 4 KiB are most common). The file system software is responsible for organizing these sectors into files and directories, and keeping track of which sectors belong to which file and which are not being used. Most file systems address data in fixed-sized units called "clusters" or "blocks" which contain a certain number of disk sectors (usually 1-64). This is the smallest amount of disk space that can be allocated to hold a file.
However, file systems need not make use of a storage device at all. A file system can be used to organize and represent access to any data, whether it be stored or dynamically generated (e.g., procfs).
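To make the cluster arithmetic above concrete, here is a small sketch (the sector and cluster sizes are chosen for illustration). Because the cluster is the smallest allocatable unit, any file that does not fill its last cluster wastes the remainder as slack space:

```python
SECTOR = 512
SECTORS_PER_CLUSTER = 8            # 4 KiB clusters, a common configuration
CLUSTER = SECTOR * SECTORS_PER_CLUSTER

def clusters_needed(file_size: int) -> int:
    """Ceiling division: even one byte over a boundary needs a whole cluster."""
    return -(-file_size // CLUSTER)

file_size = 10_000                 # a 10,000-byte file
allocated = clusters_needed(file_size) * CLUSTER
assert clusters_needed(file_size) == 3       # 3 clusters of 4096 bytes
assert allocated - file_size == 2288         # slack: 12288 - 10000 bytes wasted
```

This waste is the trade-off behind cluster size: larger clusters mean fewer bookkeeping entries but more slack per small file.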
NFS Naming -
A fundamental idea of NFS is to provide transparent access to files, in this case by allowing a client to mount a remote file system into its own file system.
- Actually, to allow users to mount part of a file system into their file system.
A downside of this is that users have different names for the same files.
- Normally users namespaces would be partly standardized.
NFS Processes
– Because NFS servers were designed to be stateless, a server crash is simple to handle, with no recovery stage necessary.
– But, as a result, no guarantees can be offered to the client.
Distributed System Solved Paper Dec 2007
Unit- 1
Q1 a) What do you understand by transparency of a distributed system? What are the different forms of transparency that are applied to distributed systems? Discuss the scalability problems and techniques to handle them. ( Marks 10 )
Ans Transparency of distributed system- Transparency means that any form of distributed system should hide its distributed nature from its users, appearing and functioning as a normal centralized system.
There are many types of transparency:
- Access transparency - Regardless of how resource access and representation has to be performed on each individual computing entity, the users of a distributed system should always access resources in a single, uniform way.
- Location transparency - Users of a distributed system should not have to be aware of where a resource is physically located.
- Migration transparency - Users should not be aware of whether a resource or computing entity possesses the ability to move to a different physical or logical location.
- Relocation transparency - Should a resource move while in use, this should not be noticeable to the end user.
- Replication transparency - If a resource is replicated among several locations, it should appear to the user as a single resource.
- Concurrency transparency - While multiple users may compete for and share a single resource, this should not be apparent to any of them.
- Failure transparency - Always try to hide any failure and recovery of computing entities and resources.
- Persistence transparency - Whether a resource lies in volatile or permanent memory should make no difference to the user.
- Security transparency - Negotiation of cryptographically secure access of resources must require a minimum of user intervention, or users will circumvent the security in preference of productivity.
Formal definitions of most of these concepts can be found in RM-ODP, the Open Distributed Processing Reference Model (ISO 10746).
The degree to which these properties can or should be achieved may vary widely. Not every system can or should hide everything from its users. For instance, due to the existence of a fixed and finite speed of light there will always be more latency on accessing resources distant from the user. If one expects real-time interaction with the distributed system, this may be very noticeable.
Transparency     Description
Access           Hide differences in data representation and how a resource is accessed
Location         Hide where a resource is located
Migration        Hide that a resource may move to another location
Relocation       Hide that a resource may be moved to another location while in use
Replication      Hide that a resource is replicated
Concurrency      Hide that a resource may be shared by several competitive users
Failure          Hide the failure and recovery of a resource
Persistence      Hide whether a (software) resource is in memory or on disk
Q (b) Discuss the parameter passing mechanisms used in RPC. Briefly discuss message-oriented communication. ( Marks 10 )
Ans Parameter Passing - The function of the client stub is to take its parameters, pack them into a message, and send them to the server stub. While this sounds straightforward, it is not quite as simple as it at first appears. In this section we will look at some of the issues concerned with parameter passing in RPC systems.
Passing Value Parameters - Packing parameters into a message is called parameter marshaling. As a very simple example, consider a remote procedure, add(i, j), that takes two integer parameters i and j and returns their arithmetic sum as a result. (As a practical matter, one would not normally make such a simple procedure remote due to the overhead, but as an example it will do.) The call to add is shown in the left-hand portion (in the client process) in Fig. 2-3. The client stub takes its two parameters and puts them in a message as indicated. It also puts the name or number of the procedure to be called in the message, because the server might support several different calls and it has to be told which one is required.
When the message arrives at the server, the server stub examines the message to see which procedure is needed and makes the appropriate call. If the server also supports other remote procedures, the server stub might have a switch statement in it to select the procedure to be called, depending upon the first field of the message. The actual call from the stub to the server looks much like the original client call, except that the parameters are variables initialized from the incoming message.
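The marshaling of the add(i, j) example can be sketched as follows (a minimal illustration; the wire format and procedure number are invented, not part of any real RPC standard):

```python
import struct

ADD_PROC = 1  # hypothetical procedure number identifying "add"

def marshal_add(i: int, j: int) -> bytes:
    """Client stub: pack the procedure number and both parameters
    into one message, in network byte order."""
    return struct.pack("!iii", ADD_PROC, i, j)

def server_stub(message: bytes) -> int:
    """Server stub: unpack the message and switch on its first field
    to select the procedure to call."""
    proc, i, j = struct.unpack("!iii", message)
    if proc == ADD_PROC:
        return i + j              # the actual call to the server procedure
    raise ValueError("unknown procedure")

assert server_stub(marshal_add(4, 7)) == 11
```

The fixed `!iii` layout also hints at why real RPC systems need agreed-upon data representations: both stubs must interpret the bytes identically, regardless of each machine's native format.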
Message-oriented communication is a way of communicating between processes. Messages, which correspond to events, are the basic units of data delivered. Tanenbaum and Steen classified message-oriented communication according to two factors---synchronous or asynchronous communication, and transient or persistent communication. In synchronous communication, the sender blocks waiting for the receiver to engage in the exchange. Asynchronous communication does not require both the sender and the receiver to execute simultaneously. So, the sender and recipient are loosely-coupled. The amount of time messages are stored determines whether the communication is transient or persistent. Transient communication stores the message only while both partners in the communication are executing. If the next router or receiver is not available, then the message is discarded. Persistent communication, on the other hand, stores the message until the recipient receives it.
A typical example of asynchronous persistent communication is Message-Oriented Middleware (MOM). Message-oriented middleware is also called a message-queuing system, a message framework, or just a messaging system. MOM can form an important middleware layer for enterprise applications on the Internet. In the publish and subscribe model, a client can register as a publisher or a subscriber of messages. Messages are delivered only to the relevant destinations and only once, with various communication methods including one-to-many or many-to-many communication. The data source and destination can be decoupled under such a model.
The Java Message Service (JMS) from Sun Microsystems provides a common interface for Java applications to MOM implementations. Since JMS was integrated with the recent version of the Java 2 Enterprise Edition (J2EE) platform, Enterprise Java Beans (EJB)---the component architecture of J2EE---has a new type of bean, the message-driven bean. The JMS integration simplifies the enterprise development, allowing a decoupling between components.
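The publish/subscribe decoupling described above can be sketched with in-process queues (illustrative only; the Broker class is invented, and real MOM systems such as JMS brokers add persistence, acknowledgements, and message filtering):

```python
import queue

class Broker:
    """Minimal topic-based publish/subscribe broker."""
    def __init__(self):
        self.topics = {}                 # topic name -> subscriber queues

    def subscribe(self, topic):
        q = queue.Queue()
        self.topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message):
        # The sender never blocks on receivers: delivery is asynchronous,
        # and publisher and subscribers are decoupled through the broker.
        for q in self.topics.get(topic, []):
            q.put(message)

broker = Broker()
inbox = broker.subscribe("stock.updates")
broker.publish("stock.updates", "AAPL 190.5")
assert inbox.get(timeout=1) == "AAPL 190.5"
```

Because the publisher addresses a topic rather than a recipient, one-to-many delivery falls out naturally: adding a second subscriber requires no change to the publisher.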
{i} Data stream - Stream-based data management enables the efficient analysis and processing of large volumes of data in distributed environments. Network-aware optimization techniques allow effective resource usage, taking into account computational load and network bandwidth in a distributed data stream management system. The data stream sharing approach is based on distributing query processing in the network and on sharing preprocessed data streams to satisfy multiple similar queries. To increase the possibilities for sharing, the extended approach of data stream widening can alter existing streams so that they additionally contain all the data necessary for a new query. Since data stream widening requires the treatment of disjunctive predicates, methods for matching and evaluating such predicates are also needed.
{ii} Specifying QOS -Delivering quality of service (QOS) guarantees in distributed systems is fundamentally an end-to-end issue, that is, from application-to-application. Consider, for example, the remote access and distribution of audio and video content from a web server: in the distributed system platform, quality of service assurances should apply to the complete flow of information from the remote server across the network to the point of delivery and play out
Generally, this requires end-to-end admission testing and resource reservation in the first instance, followed by careful co-ordination of disk and thread scheduling and flow control in the end-systems, packet/cell scheduling and congestion control in the network and, finally, active end-to-end monitoring and maintenance of the delivered quality of service.
{iii} Token bucket algorithm -
a) Tokens are generated at a constant rate.
b) A token represents a fixed number of bytes that an application is allowed to pass to the network.
- tolerates larger bursts if the traffic has been idle
- tokens to send data are generated at a fixed rate and collected in a bucket
- data is sent only if a sufficient number of tokens is in the bucket; tokens are removed after sending
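The points above can be sketched as a small implementation (names and parameter values are illustrative):

```python
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate               # tokens generated per second (constant)
        self.capacity = capacity       # bucket size: bounds the largest burst
        self.tokens = capacity         # a full bucket after an idle period
        self.last = time.monotonic()

    def allow(self, n):
        """Try to spend n tokens for n bytes of data; True if permitted."""
        now = time.monotonic()
        # refill at the constant rate, but never beyond capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n           # tokens removed after sending
            return True
        return False

bucket = TokenBucket(rate=100, capacity=200)
assert bucket.allow(150)       # burst permitted: the idle bucket started full
assert not bucket.allow(150)   # immediately after, not enough tokens remain
```

The capacity parameter is what distinguishes the token bucket from a plain leaky bucket: idle time accumulates credit, so short bursts above the average rate are tolerated.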
Ans
{ i } Multithreaded clients and multithreaded servers - A multithreaded server is any server that has more than one thread. Because a transport requires its own thread, multithreaded servers also have multiple transports. The number of thread-transport pairs that a server contains defines the number of requests that the server can handle in parallel. You create the first transport within the task directly from the TServiceDefinition instance and clone additional transports from this initial transport.
When a multithreaded server starts:
- The first thread in the task starts up and creates a TServiceDefinition using TStandardServiceDefinition.
- This thread creates the first transport for the first dispatcher, directly or indirectly.
- The thread then creates more threads to receive multiple requests. Each thread can only accommodate one transport; multiple threads cannot share transports. You do not have to create one MRemoteDispatcher instance per thread, however, as you can share MRemoteDispatcher instances between threads.
Instances that derive from MRemoteDispatcher and threads do not need to follow a pre-set model. MRemoteDispatcher is extremely lightweight, so you can base your model on the weight and semantics of the specific derivation of MRemoteDispatcher that you intend to use.
Some possible implementations of multithreaded servers include:
- One instance of an MRemoteDispatcher for each thread-transport pair
- One instance of an MRemoteDispatcher for the entire server (where the derivation provides its own synchronization for multithreaded access)
- A pool of dispatchers
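As an illustrative sketch (not the framework's own API; class and function names are invented), the thread-per-transport pattern with one shared, self-synchronizing dispatcher might look like this:

```python
import socket, threading

class Dispatcher:
    """One dispatcher for the entire server: it provides its own
    synchronization so that many threads can share it safely."""
    def __init__(self):
        self.lock = threading.Lock()

    def dispatch(self, request: bytes) -> bytes:
        with self.lock:
            return request.upper()      # placeholder for real request handling

def serve_connection(conn, dispatcher):
    # each thread owns exactly one transport (here, a socket)
    with conn:
        data = conn.recv(1024)
        conn.sendall(dispatcher.dispatch(data))

def server(sock, dispatcher):
    while True:
        conn, _ = sock.accept()
        # one new thread per transport; all threads share the dispatcher
        threading.Thread(target=serve_connection,
                         args=(conn, dispatcher), daemon=True).start()
```

The number of live thread-transport pairs is then exactly the number of requests being handled in parallel, mirroring the description above.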
{ii} Processes and lightweight processes - The term "process" is often used with several different meanings. Here we stick to the usual OS textbook definition: a process is an instance of a program in execution. You might think of it as the collection of data structures that fully describes how far the execution of the program has progressed.
Processes are like human beings: they are generated, they have a more or less significant life, they optionally generate one or more child processes, and eventually they die. A small difference is that each process has just one parent. From the kernel's point of view, the purpose of a process is to act as an entity to which system resources (CPU time, memory, etc.) are allocated.
When a process is created, it is almost identical to its parent. It receives a (logical) copy of the parent's address space and executes the same code as the parent, beginning at the next instruction following the process creation system call. Although the parent and child may share the pages containing the program code (text), they have separate copies of the data (stack and heap), so that changes by the child to a memory location are invisible to the parent (and vice versa).
While earlier Unix kernels employed this simple model, modern Unix systems do not. They support multithreaded applications, that is, user programs having many relatively independent execution flows sharing a large portion of the application data structures. In such systems, a process is composed of several user threads (or simply threads), each of which represents an execution flow of the process. Nowadays, most multithreaded applications are written using standard sets of library functions called pthread (POSIX thread) libraries. Older versions of the Linux kernel offered no support for multithreaded applications. From the kernel's point of view, a multithreaded application was just a normal process. The multiple execution flows of a multithreaded application were created, handled, and scheduled entirely in User Mode, usually by means of a POSIX-compliant pthread library.
However, such an implementation of multithreaded applications is not very satisfactory. For instance, suppose a chess program uses two threads: one of them controls the graphical chessboard, waiting for the moves of the human player and showing the moves of the computer, while the other thread ponders the next move of the game. While the first thread waits for the human move, the second thread should run continuously, thus exploiting the thinking time of the human player. However, if the chess program is just a single process, the first thread cannot simply issue a blocking system call waiting for a user action; otherwise, the second thread is blocked as well. Instead, the first thread must employ sophisticated nonblocking techniques to ensure that the process remains runnable.
Linux uses lightweight processes to offer better support for multithreaded applications. Basically, two lightweight processes may share some resources, like the address space, the open files, and so on. Whenever one of them modifies a shared resource, the other immediately sees the change. Of course, the two processes must synchronize themselves when accessing the shared resource.
A straightforward way to implement multithreaded applications is to associate a lightweight process with each thread. In this way, the threads can access the same set of application data structures by simply sharing the same memory address space, the same set of open files, and so on; at the same time, each thread can be scheduled independently by the kernel so that one may sleep while another remains runnable. Examples of POSIX-compliant pthread libraries that use Linux's lightweight processes are Linux Threads, Native POSIX Thread Library (NPTL), and IBM's Next Generation Posix Threading Package (NGPT).
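The shared-address-space behaviour described above can be demonstrated with a short sketch (illustrative; it uses Python threads, which on Linux are implemented as the lightweight processes discussed here). A change made by one thread is immediately visible to the others, so access to the shared resource must be synchronized:

```python
import threading

counter = 0                        # shared state: one address space for all threads
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10_000):
        with lock:                 # synchronize access to the shared resource
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                       # each thread is scheduled independently

assert counter == 40_000           # all four threads updated the same variable
```

Without the lock, concurrent increments could interleave and lose updates, which is exactly the synchronization obligation the text mentions for lightweight processes sharing resources.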
{b} Discuss the need, advantages, and disadvantages of code migration. Differentiate between weak and strong mobility. ( 10 Marks )
Ans Need of Code migration - The need to migrate applications or databases in enterprises arises from changes in business demands or technology challenges, either to improve operational efficiency or to manage risk. Many enterprises face the challenge of ensuring that investments in legacy systems do not get locked into proprietary and outdated technologies while migrating to newer systems. The need is to preserve the established business rules and practices of the old system while at the same time freeing the valuable human resources locked into maintaining it.
While options such as rewriting or buying new products exist, migration leverages the business model and the features of the application and can be done in a cost-effective manner.
BIS specializes in migrating applications to .NET, whether it involves migration of legacy Win32 (Visual Basic, Visual C++), Oracle, J2EE, FoxPro, or PowerBuilder applications into .NET and Java. With this expertise, BIS can convert code with its internally developed code migration tool “TOREDO”, which provides savings of nearly 40-60% compared to the alternative of manual conversion; moreover, all human-related mistakes are avoided and the migration work is fully documented.
BIS offers a risk-free migration path that can be integrated with your existing systems. Each project occurs within identified budget parameters, and a strict timeline is followed to ensure you can promptly begin taking advantage of your technology investment.
Advantages
- Flexibility
- Enables dynamic configuration
- Better performance through steady distribution
Disadvantages
- Costly
- Intricate
Weak and strong mobility -
Weak mobility model: In this model, it is possible to transfer only the code segment, along with perhaps some initialization data. Feature: a transferred program is always started from its initial state, e.g. Java applets.
Strong mobility model: Besides the code segment being transferred, the execution segment can be transferred as well. Feature: A running process can be stopped, subsequently moved to another machine, and then resume execution where it left off.
{C} What is the concept of logical clocks? Discuss Lamport's approach for logical clock synchronization. ( 10 Marks )
Ans logical clocks -
Let’s say we have a logical clock, LCi, for each processor. Whenever an event happens, we increment LCi.
If a processor X sends a message to processor Y, then processor X will also send LCX, which is that processor's logical clock.
When processor Y receives this message, then we do:
If LCY < (LCX + 1):
LCY = LCX + 1
In order to update processor Y’s logical clock.
Lamport Clocks -
Let’s now say that we have two events, A and B. There are a few things we can say:
- If A precedes (happens before) B, then we can write A –> B
- If A and B are concurrent events, then sadly we can’t say anything about their ordering.
If A –> B is true, then it must also be true that LCA < LCB. However, the converse does not hold: just because LCA < LCB does not mean that A –> B. Therefore, we cannot infer a causal ordering of events just by looking at their timestamps.
- In a distributed system, it is not possible in practice to synchronize time across entities (typically thought of as processes) within the system; hence, the entities can use the concept of a logical clock based on the events through which they communicate.
- If two entities do not exchange any messages, then they probably do not need to share a common clock; events occurring on those entities are termed as concurrent events.
- Among the processes on the same local machine we can order the events based on the local clock of the system.
- When two entities communicate by message passing, then the send event is said to 'happen before' the receive event, and the logical order can be established among the events.
- A distributed system is said to have partial order if we can have a partial order relationship among the events in the system. If 'totality', i.e., causal relationship among all events in the system can be established, then the system is said to have total order.
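The Lamport update rule described above can be written as a small Python sketch (class and method names are illustrative, not a standard API):

```python
class LamportClock:
    """Toy logical clock following the update rule in the text."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Every local event (including a send) increments the clock.
        self.time += 1
        return self.time

    def receive(self, sender_time):
        # Rule from the text: if LCY < (LCX + 1), set LCY = LCX + 1.
        if self.time < sender_time + 1:
            self.time = sender_time + 1
        return self.time

x, y = LamportClock(), LamportClock()
x.tick()                   # event on X: LCX = 1
msg_time = x.tick()        # X sends a message stamped LCX = 2
y.receive(msg_time)        # Y updates: LCY becomes 3
```

Note that this gives a consistent ordering of causally related events, but, as discussed above, equal or ordered timestamps alone do not let us recover causality.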
Ans {i} Strict consistency - Strict consistency in computer science is the most stringent consistency model.
It says that a read operation has to return the result of the latest write operation which occurred on that data item. This is only possible when a global clock exists. Since it's impossible to implement a global clock across nodes of a distributed system, this model has traditionally only been possible on a uniprocessor.
{ii} causal consistency - Causal consistency is one of the consistency models used in the domain of concurrent programming (e.g. in distributed shared memory, distributed transactions, etc.).
A system provides causal consistency if memory operations that potentially are causally related are seen by every node of the system in the same order. Concurrent writes (i.e. ones that are not causally related) may be seen in different order by different nodes. This is weaker than sequential consistency, which requires that all nodes see all writes in the same order, but is stronger than PRAM consistency, which requires only writes done by a single node to be seen in the same order from every other node.
When a node performs a read followed later by a write, even on a different variable, the first operation is said to be causally ordered before the second, because the value stored by the write may have been dependent upon the result of the read. Similarly, a read operation is causally ordered after the earlier write on the same variable that stored the data retrieved by the read. Also, even two write operations performed by the same node are defined to be causally ordered, in the order they were performed. Intuitively, after writing value v into variable x, a node knows that a read of x would give v, so a later write could be said to be (potentially) causally related to the earlier one. Finally, we force this causal order to be transitive: that is, we say that if operation A is (causally) ordered before B, and B is ordered before C, then A is ordered before C.
Operations that are not causally related, even through other operations, are said to be concurrent.
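One standard mechanism for deciding whether two operations are causally related or concurrent is the vector clock (not described in the text above; shown here only as an illustrative sketch). Each operation carries a tuple with one counter per node:

```python
def happened_before(a, b):
    """a -> b iff a is componentwise <= b and a != b (vector-clock rule)."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def concurrent(a, b):
    # Neither ordering holds: the operations are concurrent.
    return not happened_before(a, b) and not happened_before(b, a)

w1 = (1, 0)   # write by node 0
w2 = (1, 1)   # write by node 1 after seeing w1: causally related to w1
w3 = (2, 0)   # later write by node 0 that never saw w2: concurrent with w2
```

Under causal consistency, every node must see w1 before w2, but w2 and w3 may be seen in different orders by different nodes.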
{iii} Weak consistency - The name weak consistency may be used in two senses. In the first, stricter and more popular sense, weak consistency is one of the consistency models used in the domain of concurrent programming (e.g. in distributed shared memory, distributed transactions, etc.).
The protocol is said to support weak consistency if:
- All accesses to synchronization variables are seen by all processes (or nodes, processors) in the same order (sequentially) - these are synchronization operations. Accesses to critical sections are seen sequentially.
- All other accesses may be seen in different order on different processes (or nodes, processors).
- The set of both read and write operations in between different synchronization operations is the same in each process.
Therefore, there can be no access to a synchronization variable if there are pending write operations, and no new read/write operation can be started while the system is performing a synchronization operation.
In the second sense, more general, weak consistency may be applied to any consistency model weaker than sequential consistency.
Ans {i} Eventual consistency - Eventual consistency is one of the consistency models used in the domain of parallel programming, for example in distributed shared memory, distributed transactions, and optimistic replication.
The eventual consistency model states that, when no updates occur for a long period of time, eventually all updates will propagate through the system and all the replicas will be consistent.
{ii} Process resilience -
- Processes can be made fault tolerant by arranging to have a group of processes, with each member of the group being identical
- A message sent to the group is delivered to all of the “copies” of the process (the group members), and then only one of them performs the required service
- If one of the processes fail, it is assumed that one of the others will still be able to function (and service any pending request or operation)
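The replication idea above can be sketched as a toy failover loop in Python (all names are illustrative, not a real group-communication API; real systems also need multicast and agreement protocols):

```python
# Toy process group: a request is tried against each identical replica
# until one succeeds, so the group survives individual failures.
def make_replica(healthy):
    def serve(request):
        if not healthy:
            raise ConnectionError("replica down")
        return f"handled: {request}"
    return serve

group = [make_replica(False), make_replica(False), make_replica(True)]

def call_group(group, request):
    for replica in group:
        try:
            return replica(request)
        except ConnectionError:
            continue  # this copy failed; try the next identical copy
    raise RuntimeError("all replicas failed")
```

Even with two of the three replicas down, `call_group(group, "read x")` still succeeds.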
{C} Define the following terms: Public key, Private key, Session key, Symmetric key, and explain with the help of a block diagram how authentication takes place in KERBEROS. ( 10 Marks )
Ans Public key - Public-key cryptography is a relatively new cryptographic approach whose distinguishing characteristic is the use of asymmetric key algorithms instead of, or in addition to, symmetric key algorithms. Unlike symmetric key algorithms, it does not require a secure initial exchange of one or more secret keys between sender and receiver. The asymmetric key algorithms are used to create a mathematically related key pair: a secret private key and a published public key. Use of these keys allows protection of the authenticity of a message by creating a digital signature of the message using the private key, which can be verified using the public key. It also allows protection of the confidentiality and integrity of a message by public-key encryption: encrypting the message using the public key, which can only be decrypted using the private key.
Public key cryptography is a fundamental and widely used technology around the world. It is the approach which is employed by many cryptographic algorithms and cryptosystems. It underlies such Internet standards as Transport Layer Security (TLS) (successor to SSL), PGP, and GPG.
private key - The private key is made of the modulus n and the private (or decryption) exponent d, which must be kept secret.
- All parts of the private key must be kept secret in this form. p and q are sensitive since they are the factors of n, and allow computation of d given e. If p and q are not stored in this form of the private key then they are securely deleted along with other intermediate values from key generation.
- Although this form allows faster decryption and signing by using the Chinese Remainder Theorem (CRT), it is considerably less secure since it enables side-channel attacks. This is a particular problem if implemented on smart cards, which benefit most from the improved efficiency. (Start with y = x^e mod n and let the card decrypt that, so it computes y^d (mod p) or y^d (mod q), whose results give some value z. Now induce an error in one of the computations. Then gcd(z − x, n) will reveal p or q.)
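As a toy illustration of why the factors p and q must stay secret, the textbook RSA example with tiny primes shows that anyone holding the factors of n can derive the private exponent d (real keys use primes hundreds of digits long; the values here are for illustration only):

```python
from math import gcd

p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)    # totient, computable only if p and q are known
e = 17                     # public exponent
assert gcd(e, phi) == 1
d = pow(e, -1, phi)        # private exponent: knowing p and q reveals d

m = 65                     # a message, encoded as a number < n
c = pow(m, e, n)           # encrypt with the public key (n, e)
assert pow(c, d, n) == m   # decrypt with the private key (n, d)
```

The modular inverse `pow(e, -1, phi)` requires Python 3.8 or later.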
Session key - a session key is a single-use symmetric key used for encrypting all the messages in one communication session; a fresh key is typically generated for each session and discarded afterwards.
symmetric key - Symmetric-key algorithms are a class of algorithms for cryptography that use trivially related, often identical, cryptographic keys for both encryption and decryption.
The encryption key is trivially related to the decryption key, in that they may be identical or there is a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link.
Other terms for symmetric-key encryption are secret-key, single-key, shared-key, one-key, and private-key encryption. Use of the last and first terms can create ambiguity with similar terminology used in public-key cryptography.
Types of symmetric-key algorithms
Symmetric-key algorithms can be divided into stream ciphers and block ciphers. Stream ciphers encrypt the bytes of the message one at a time, and block ciphers take a number of bytes and encrypt them as a single unit. Blocks of 64 bits have been commonly used; the Advanced Encryption Standard algorithm approved by NIST in December 2001 uses 128-bit blocks.
Some examples of popular and well-respected symmetric algorithms include Twofish, Serpent, AES (Rijndael), Blowfish, CAST5, RC4, TDES, and IDEA.
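The symmetric-key idea, in the stream-cipher style described above, can be sketched as a toy XOR cipher (the keystream derivation here is illustrative only and NOT a real cipher): the same shared key both encrypts and decrypts.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a toy keystream by repeatedly hashing the key."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Stream-cipher pattern: XOR each byte with the keystream.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

shared_key = b"shared secret"             # same key on both sides
ct = xor_cipher(shared_key, b"attack at dawn")   # encrypt
pt = xor_cipher(shared_key, ct)                  # decrypt: same operation
```

This makes the "shared secret" nature of symmetric keys concrete: anyone holding `shared_key` can read the traffic, which is exactly why key distribution (and Kerberos, below) matters.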
Authentication in KERBEROS -You may not know it, but your network is probably unsecured right now. Anyone with the right tools could capture, manipulate, and add data between the connections you maintain with the internet. The security cat and mouse game isn’t one sided, however. Network administrators are currently taking advantage of Kerberos to help combat security concerns.
Project Athena
Project Athena was initiated in 1983, when it was decided by the Massachusetts Institute of Technology that security in the TCP/IP model just wasn’t good enough. A total of 8 long years of research passed before Kerberos, named after the three-headed Greek mythological dog known as Cerberus, was officially complete.
The result of MIT’s famous research became widely used as default authentication methods in popular operating systems. If you are running Windows 2000 or later, you are indeed running Kerberos by default. Other operating systems such as the Mac OS X also carry the Kerberos protocol. Kerberos isn’t just limited to operating systems, however, since it is employed by many of Cisco’s routers and switches.
What Does It Protect Against, Anyways?
If you have ever used an FTP program over a network, you are at risk. If you have ever used a Telnet program over a network, you are again at risk. These are just two examples of how little security some applications provide. FTP and Telnet use what are called plaintext passwords, also known as cleartext passwords. These passwords are ridiculously easy to intercept with the right tools.
Anyone with a simple packet sniffer and packet analyzer can obtain an FTP or telnet logon with ease. With that kind of sensitive information being transmitted, the need for Kerberos is obvious. This need doesn’t stop there, however. Sure FTP and Telnet related logons are easy to intercept, but then again so is every other connection any of your applications has to the internet.
Through man-in-the-middle attacks, a hacker can get the logon information for just about anything. From online bank passwords to private passwords on your computer, they are all generally vulnerable to this attack. A man-in-the-middle attack occurs when the hacker acts as the “man in the middle” between two computers: the hacker pretends to each computer that it is, in fact, the computer it has connected to. In reality, all the data is being routed to the hacker, who can then modify or add instructions to the data.
Okay, This Sounds Useful…But How Does It Work?
Kerberos operates by encrypting data with a symmetric key. A symmetric key is a type of authentication where both the client and server agree to use a single encryption/decryption key for sending or receiving data. When working with the encryption key, the details are actually sent to a key distribution center, or KDC, instead of sending the details directly between each computer. The entire process takes a total of eight steps, as shown below.
1. – The authentication service, or AS, receives the request from the client and verifies that the client is indeed the computer it claims to be. This is usually just a simple database lookup of the user's ID.
2. – Upon verification, a timestamp is created. This puts the current time in a user session, along with an expiration date. The default expiration date of a timestamp is 8 hours. The encryption key is then created. The timestamp ensures that when 8 hours is up, the encryption key is useless. (This is used to make sure a hacker doesn’t intercept the data, and try to crack the key. Almost all keys are able to be cracked, but it will take a lot longer than 8 hours to do so)
3. – The key is sent back to the client in the form of a ticket-granting ticket, or TGT. This is a simple ticket that is issued by the authentication service. It is used for authenticating the client for future reference.
4. – The client submits the ticket-granting ticket to the ticket-granting server, or TGS, to get authenticated.
5. – The TGS creates an encrypted key with a timestamp, and grants the client a service ticket.
6. – The client decrypts the ticket, tells the TGS it has done so, and then sends its own encrypted key to the service.
7. – The service decrypts the key, and makes sure the timestamp is still valid. If it is, the service contacts the key distribution center to receive a session that is returned to the client.
8. – The client decrypts the ticket. If the keys are still valid, communication is initiated between client and server.
Is all that back-and-forth communication really necessary? When concerning speed and reliability, it is entirely necessary. After the communication is made between the client and server, no further need of transmitting logon information is needed. The client is authenticated until the session expires.
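The timestamp-and-expiry idea in steps 2 and 5 can be sketched as a toy ticket issuer and validator. This sketch seals a ticket with an HMAC for simplicity; real Kerberos encrypts tickets with symmetric keys, and all names here are illustrative:

```python
import hashlib, hmac, json, time

KDC_KEY = b"kdc-service-shared-key"   # illustrative shared secret

def issue_ticket(client_id: str, lifetime_s: int = 8 * 3600) -> dict:
    """'KDC' issues a ticket: client id plus expiry, sealed with an HMAC."""
    body = {"client": client_id, "expires": time.time() + lifetime_s}
    mac = hmac.new(KDC_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}

def validate_ticket(ticket: dict) -> bool:
    """Service checks the seal and the timestamp before serving the client."""
    expected = hmac.new(KDC_KEY, json.dumps(ticket["body"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(ticket["mac"], expected)
            and time.time() < ticket["body"]["expires"])

t = issue_ticket("alice")
assert validate_ticket(t)            # fresh ticket is accepted
t["body"]["client"] = "mallory"      # tampering breaks the seal
assert not validate_ticket(t)
```

The 8-hour default lifetime plays the same role as the timestamp in step 2: even if an attacker captures the ticket, it stops being useful once it expires.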
4.{a} Discuss the object model of CORBA and the services provided by the CORBA system. ( 10 Marks )
Ans
Objects and services are specified by implementing an IDL (Interface Definition Language). IDL is a formal language used to define object interfaces independent of the programming language used to implement those methods. This is important because CORBA supports a variety of programming languages, and either the client, or server could be implemented in a wide variety of languages and running on different computer architectures.
From the IDL specification, the compiler then creates a client-side code stub and a server-side code skeleton. The client stub provides the client with an interface to the remote object (a proxy), and provides marshalling instructions for the client's Object Request Broker (ORB). The server's skeleton is basically a method interface for the server's ORB, so that the ORB knows which methods are available. The server skeleton also provides unmarshalling instructions to the server ORB so that the server can unravel the client's method parameters. The process is reversed when returning a value from the remote object to the client.
Proxy: A server between a client application, such as a Web browser, and a real server. It intercepts all requests to the real server to see if it can fulfil the requests itself. If not, it forwards the request to the real server. Proxy servers have two main purposes: improve performance and filter requests.
The core of CORBA is the Object Request Broker (ORB). The ORB is the principal component for the transmission of information between the client and the server of the CORBA application. The ORB manages marshalling requests via the code stubs provided by the IDL, establishes a connection to the server, sends the data, and executes the requests on the server side. The same process occurs when the server wants to return the results of the operation.
In order for the client ORB to be able to locate the server with the appropriate resource, CORBA implements a naming service. The naming service is an application that runs as a background process on a remote server at a well known endpoint. This service is responsible for maintaining a lookup table for all of the services running in the distributed computer.
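The stub/skeleton round trip described above can be sketched with an in-process toy. JSON marshalling stands in for a real ORB wire protocol such as IIOP, and all class and method names are illustrative:

```python
import json

class Calculator:                 # the "remote" object on the server
    def add(self, a, b):
        return a + b

def skeleton_dispatch(obj, request_bytes):
    """Server-side skeleton: unmarshal, select the method, invoke it."""
    req = json.loads(request_bytes)
    method = getattr(obj, req["method"])
    return json.dumps({"result": method(*req["args"])}).encode()

class CalculatorStub:
    """Client-side stub (proxy): marshals the call, hiding the remoteness."""
    def __init__(self, transport):
        self.transport = transport    # stands in for the ORB connection
    def add(self, a, b):
        req = json.dumps({"method": "add", "args": [a, b]}).encode()
        return json.loads(self.transport(req))["result"]

server_obj = Calculator()
stub = CalculatorStub(lambda msg: skeleton_dispatch(server_obj, msg))
result = stub.add(2, 3)    # looks like a local call to the client
```

To the client, `stub.add(2, 3)` looks identical to a local method call, which is exactly the transparency the stub/skeleton pair is meant to provide.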
{b} Briefly present the evolution stages of DCOM. How does communication take place in DCOM? Discuss in detail. ( 10 Marks )
Ans DCOM - There is no doubt that there is great demand for large-scale distributed applications. Indeed, tremendously expensive special-purpose distributed systems have been deployed and today are used extensively in the banking, airline, and telecommunication industries. The major barrier to supporting these, and even richer, applications on the Internet is the difficulty of designing, building, testing, and maintaining distributed applications using the tools that comprise the state-of-the-art today.
Our proposal is to develop tools that will enable developers to realize scalable distributed applications on the Internet. The life cycle of a distributed application can typically be viewed as having four stages:
1. Design stage
2. Implementation and testing stage
3. Deployment and utilization stage
4. Maintenance and evolution stage
Creative Solutions develops different tools to assist you with each of these four stages:
1. Our approach to helping developers design applications is to provide a set of general-purpose building blocks from which more complex systems can be composed.
2. To facilitate implementation, we plan to develop a methodology for whole-system simulation using true client behavior in highly realistic network conditions.
3. Deploying network applications today is a painfully manual process and prone to error. To reduce this hurdle, we propose to create a shared infrastructure that software developers will employ during the deployment and the maintenance and evolution stages.
4. Finally, we plan to develop a set of tools for monitoring distributed applications that will improve their long-term reliability by reporting on their behavior (and failures).
How Communication takes place in DCOM - Most companies have not taken full advantage of multi-tiered (n-tier) architectures. The guiding principles of distributed multi-tiered architectures like J2EE and .NET / Windows DNA are Web computing; faster time to market; true interoperability; scalability; reduced complexity; language, tool, and hardware independence; and lower cost of ownership.
For the distributed applications development, Creative Solutions employs component technologies like COM, DCOM, Enterprise Java Beans, RMI and CORBA and UML, Design patterns for software Design.
The .NET Framework provides access to technologies that enable developers to build distributed applications. We use .NET to take full advantage.
{C} What is the difference between stateless and stateful servers? Discuss the file system model, processes, and naming in the Sun Network File System. ( 10 Marks )
Ans Stateful Server
- A stateful server maintains stateful information on each active client.
- Stateful information can reduce the data exchanged, and thereby the response time.
Stateful vs. Stateless Server
- Stateless server is straightforward to code.
- Stateful server is harder to code, but the state information maintained by the server can reduce the data exchanged, and allows enhancements to a basic service.
- Maintaining stateful information is difficult in the presence of failures.
It depends on what the server does. For example, a database server that has been handed a complete transaction will maintain a log to be able to redo its operations when recovering. However, there is no need to take checkpoints for the sake of the state of the distributed system.
Checkpointing is done only for local recovery
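The stateless style can be made concrete in a few lines: the client supplies everything the server needs on every call, so the server keeps no per-client state, and a crashed-and-restarted server can answer the next request as if nothing happened (filenames and contents here are illustrative):

```python
# Toy stateless file server in the NFS v2/v3 style.
files = {"notes.txt": b"hello distributed world"}

def stateless_read(filename, offset, count):
    """Each request is self-contained: name, offset, and count arrive
    together, so no open-file table or client session is needed."""
    return files[filename][offset:offset + count]

# The client, not the server, tracks its position in the file.
part1 = stateless_read("notes.txt", 0, 5)    # b"hello"
part2 = stateless_read("notes.txt", 6, 11)   # b"distributed"
```

A stateful design would instead have the server remember an open-file handle and a current offset per client, which is what makes recovery after a server crash harder.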
File system model - In computing, a file system (often also written as filesystem) is a method for storing and organizing computer files and the data they contain to make it easy to find and access them. File systems may use a data storage device such as a hard disk or CD-ROM and involve maintaining the physical location of the files; they might provide access to data on a file server by acting as clients for a network protocol (e.g., NFS, SMB, or 9P clients); or they may be virtual and exist only as an access method for virtual data (e.g., procfs). A file system is distinguished from a directory service and registry.
More formally, a file system is a special-purpose database for the storage, organization, manipulation, and retrieval of data.
Aspects of file systems - Most file systems make use of an underlying data storage device that offers access to an array of fixed-size physical sectors, generally a power of 2 in size (512 bytes or 1, 2, or 4 KiB are most common). The file system software is responsible for organizing these sectors into files and directories, and keeping track of which sectors belong to which file and which are not being used. Most file systems address data in fixed-sized units called "clusters" or "blocks" which contain a certain number of disk sectors (usually 1-64). This is the smallest amount of disk space that can be allocated to hold a file.
However, file systems need not make use of a storage device at all. A file system can be used to organize and represent access to any data, whether it be stored or dynamically generated (e.g., procfs).
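Because the cluster is the smallest allocatable unit, even a tiny file consumes a whole cluster (internal fragmentation). A short sketch of the arithmetic, assuming 512-byte sectors:

```python
import math

SECTOR = 512  # bytes per physical sector (a common value)

def clusters_needed(file_size, sectors_per_cluster):
    """Number of clusters a file of file_size bytes occupies."""
    cluster_size = SECTOR * sectors_per_cluster
    # Even an empty or 1-byte file still occupies one full cluster.
    return max(1, math.ceil(file_size / cluster_size))

# A 1-byte file on an 8-sector (4 KiB) cluster uses one whole cluster:
print(clusters_needed(1, 8))        # 1
print(clusters_needed(10_000, 8))   # 3
```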
NFS Naming -
A fundamental idea of NFS is to provide transparent access to remote files by allowing a client to mount a remote file system into its own local file system.
- In practice, users mount only part of a remote file system into their own name space.
- A downside of this is that different users may end up with different names for the same file.
- Normally, users' name spaces are therefore partly standardized.
• File Handles
– File handles are unique identifiers for files (up to 128 bytes in NFSv4).
– Every file has a unique identifier.
– This means that after the first lookup based on the file name, subsequent operations can use the file's handle.
– In this way, operations on the file are independent of its current name.
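The benefit of handles can be shown with a toy server (illustrative names and structures, not the real NFS protocol): after one lookup by name, the client keeps the opaque handle, so the file stays reachable even if it is renamed.

```python
import os
import hashlib

class Server:
    def __init__(self):
        self.by_handle = {}   # opaque handle -> file data
        self.by_name = {}     # current name  -> handle

    def create(self, name, data):
        # An opaque, name-independent identifier for the file.
        handle = hashlib.sha256(os.urandom(16)).hexdigest()[:16]
        self.by_handle[handle] = data
        self.by_name[name] = handle

    def lookup(self, name):       # first access: resolve name -> handle
        return self.by_name[name]

    def rename(self, old, new):   # the handle is unchanged by a rename
        self.by_name[new] = self.by_name.pop(old)

    def read(self, handle):       # later accesses use the handle only
        return self.by_handle[handle]


srv = Server()
srv.create("report.txt", b"data")
h = srv.lookup("report.txt")
srv.rename("report.txt", "final.txt")
print(srv.read(h))   # still readable via the handle obtained before the rename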
NFS Naming – File Attributes

Mandatory Attributes | Description
TYPE | The type of the file (regular, directory, symbolic link)
SIZE | The length of the file in bytes
CHANGE | Indicator for a client to see if and/or when the file has changed
FSID | Server-unique identifier of the file's file system

Recommended Attributes | Description
ACL | An access control list associated with the file
FILEHANDLE | The server-provided file handle of this file
FILEID | A file-system unique identifier for this file
FS_LOCATIONS | Locations in the network where this file system may be found
OWNER | The character-string name of the file's owner
TIME_ACCESS | Time when the file data were last accessed
TIME_MODIFY | Time when the file data were last modified
TIME_CREATE | Time when the file was created
NFS Processes
- Early versions of NFS were designed to be stateless:
– If a server crashed, recovery was simple; no recovery stage was needed.
– But no guarantees can be offered to the client.
– Not all functionality works with a stateless server, such as file locking.
• As NFS is increasingly used over WANs, it is useful to allow clients to use caches, for which servers must maintain state.
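A consequence of statelessness is that operations can be made idempotent: each request carries an absolute offset, so a client that times out can simply retransmit without risk. A toy sketch (illustrative, not the real NFS RPCs):

```python
class StatelessFileServer:
    """Each read carries full context; repeating it is always safe."""
    def __init__(self, files):
        self.files = files   # handle -> data

    def read(self, handle, offset, count):
        return self.files[handle][offset:offset + count]


def client_read(server, handle, offset, count, retries=3):
    # Because read is idempotent, the client can blindly retransmit
    # after a presumed timeout or server crash.
    for _ in range(retries):
        try:
            return server.read(handle, offset, count)
        except ConnectionError:
            continue   # retry the identical request
    raise TimeoutError("server unreachable")


srv = StatelessFileServer({"h1": b"abcdefgh"})
print(client_read(srv, "h1", 2, 3))   # b'cde'
```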
Ans DSM servers -
- In-server: receives messages from remote DSM servers and takes appropriate action (e.g., invalidates its copy of a page).
- Out-server: receives requests from the local DSM subsystem and communicates with its peer DSM servers at remote nodes. Note that the DSM subsystem itself does not communicate directly over the network with other hosts.
- Communication with key Server.
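The in-server/out-server split can be sketched as follows (class and method names are illustrative only): the out-server side forwards local page requests to peers, while the in-server side reacts to messages arriving from remote DSM servers, such as invalidating a cached page.

```python
class DSMServer:
    def __init__(self, node_id):
        self.node_id = node_id
        self.pages = {}    # locally cached pages: page number -> contents
        self.peers = {}    # node id -> peer DSMServer

    # Out-server side: the local DSM subsystem asks for a remote page.
    def request_page(self, owner_id, page_no):
        page = self.peers[owner_id].handle_fetch(page_no)
        self.pages[page_no] = page
        return page

    # In-server side: act on messages from remote DSM servers.
    def handle_fetch(self, page_no):
        return self.pages[page_no]

    def handle_invalidate(self, page_no):
        self.pages.pop(page_no, None)   # drop the now-stale copy


a, b = DSMServer("A"), DSMServer("B")
a.peers["B"], b.peers["A"] = b, a
b.pages[7] = "v1"
a.request_page("B", 7)              # A caches page 7 from B
b.peers["A"].handle_invalidate(7)   # B invalidates A's copy before writing
print(7 in a.pages)                 # False
```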
Ans Web as a Distributed System - The World Wide Web is a large distributed system. In 1998 it comprised 70-75% of Internet traffic. With large transfers of streaming media and peer-to-peer file sharing, it no longer carries a majority of the bytes, but it still does in terms of flows. It remains an active area of research.
{C} Explain the notion of co-ordination in distributed systems and present an overview of JINI. (10 Marks)
Ans JINI - Jini is a distributed system that consists of a mixture of different but related elements. It is strongly related to the Java programming language, although many of its principles can be implemented equally well in other languages. An important part of the system is formed by a coordination model for generative communication. We first discuss this model before giving the overall architecture of a typical Jini system.
The general organization of a JavaSpace in Jini
Architecture - JavaSpaces form only part of a Jini system. Like TIB/Rendezvous, Jini is aimed at providing a small, useful set of facilities and services that will allow the construction of distributed applications. A distributed application using Jini is often described as a loose federation of devices, processes, and services. All communication in current Jini systems is based on Java RMI.
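The generative communication model behind JavaSpaces can be illustrated with a minimal tuple space (a real JavaSpace uses typed Java entries and RMI; this sketch is only an analogy). write() deposits a tuple, read() copies a matching tuple, take() removes it, and None acts as a wildcard in a template, so sender and receiver are fully decoupled.

```python
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def write(self, t):
        # Deposit a tuple into the shared space.
        self.tuples.append(t)

    def _match(self, template, t):
        # A template matches a tuple of the same length where every
        # non-None field is equal; None is a wildcard.
        return len(template) == len(t) and all(
            f is None or f == v for f, v in zip(template, t))

    def read(self, template):
        # Non-destructive lookup: return a copy, leave the tuple in place.
        for t in self.tuples:
            if self._match(template, t):
                return t
        return None

    def take(self, template):
        # Destructive lookup: remove and return a matching tuple.
        t = self.read(template)
        if t is not None:
            self.tuples.remove(t)
        return t


space = TupleSpace()
space.write(("temperature", "room-1", 21))
print(space.read(("temperature", None, None)))   # ('temperature', 'room-1', 21)
space.take(("temperature", "room-1", None))
print(space.read(("temperature", None, None)))   # None
```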