Past Achievements And Future Trends In Databases Information Technology Essay

Published: 2021-07-30 15:10:07

Category: Information Technology

Type of paper: Essay


Database Management System (DBMS) technology is today a key technology in most systems integration projects. The fast pace of development in this area makes it difficult for SI professionals to keep up with the latest advances and to appreciate the limitations of the present generation of products. This paper briefly discusses the past achievements of database research and development and comments on the current status of database technology. We then focus on new directions in this area and discuss the challenges presented by new types of applications. Existing relational DBMS (RDBMS) technology has been successfully applied to many application domains. It has proved to be an effective solution for the data management requirements of large and small organizations, and today it forms a key component of most information systems. However, advances in computer hardware and the emergence of new application requirements, such as multimedia and mobile databases, have produced a situation in which the basic underlying principles of data management need to be re-evaluated.
Keywords: Write-Ahead Logging (WAL), semantics, federated databases, indexing, query optimization, transaction management, 3D imaging.
A database management system (DBMS) consists of software that organizes the storage of data. A DBMS controls the creation, maintenance, and use of the database storage structures of organizations and of their users. It allows organizations to place control of organization-wide database development in the hands of database administrators (DBAs) and other specialists. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way.
Database management systems are usually categorized according to the database model that they support, such as the network, relational, or object model. The model tends to determine the query languages that are available to access the database. One commonly used query language for relational databases is SQL, although SQL syntax and function can vary from one DBMS to another. A common query language for object databases is OQL; although not all vendors of object databases implement it, the majority do. A great deal of the internal engineering of a DBMS is independent of the data model and is concerned with managing factors such as performance, concurrency, integrity, and recovery from hardware failures. In these areas there are large differences between products.
A relational database management system (RDBMS) implements features of the relational model. In this context, Date's "Information Principle" states: "the entire information content of the database is represented in one and only one way: namely, as explicit values in column positions (attributes) and rows in relations (tuples). Therefore, there are no explicit pointers between related tables." This contrasts with the object database management system (ODBMS), which does store explicit pointers between related types.
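The Information Principle can be illustrated with a minimal sketch in SQLite (the table and column names here are hypothetical, chosen only for illustration): related rows are linked purely by matching column values, recovered at query time with a join, never by a stored physical pointer.

```python
import sqlite3

# Hypothetical two-table schema: the link between rows is an explicit
# column value (dept_id), not a pointer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employee   (emp_id  INTEGER PRIMARY KEY, name TEXT,
                             dept_id INTEGER REFERENCES department(dept_id));
    INSERT INTO department VALUES (1, 'Research'), (2, 'Sales');
    INSERT INTO employee   VALUES (10, 'Ada', 1), (11, 'Grace', 2);
""")
# Related rows are recovered by matching values at query time (a join).
rows = conn.execute("""
    SELECT e.name, d.name
    FROM employee e JOIN department d ON e.dept_id = d.dept_id
    ORDER BY e.emp_id
""").fetchall()
print(rows)  # [('Ada', 'Research'), ('Grace', 'Sales')]
```

An ODBMS, by contrast, would let an `employee` object hold a direct reference to its `department` object.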
Some technology observers see RDBMS technology as obsolete in the context of today's application requirements and advocate a shift towards object-oriented (OO) databases. The object-relational DBMS (ORDBMS) extends the traditional RDBMS with object-oriented concepts and structures such as abstract data types, nested tables, and varying arrays. The ORDBMS was created to handle new types of data, such as audio, video, and image files, that relational databases were not equipped to handle. In addition, its development was driven by the increased use of object-oriented programming languages and the large mismatch between these languages and DBMS software.
The OO approach and the associated OO technologies are often seen as a universal solution in computing today, including databases. While complex objects play an important role in many applications, they represent only a subset of the problems that database technology needs to address. In this paper we attempt to provide a balanced view of new trends in database technology. We discuss not only the requirement for support of complex objects, but also two other application areas, high-volume databases and mobile databases, which are being successfully addressed by extending relational database technology.
We firstly review the past achievements of database research and development and discuss the present status of database technology.
Research and development in database technology during the past decade has been characterized by a striving for better support for applications beyond the traditional world, where primarily high volumes of simply structured data had to be processed efficiently. As a result, future DBMSs will include more functionality and explicitly cover more real-world semantics (in various forms) that would otherwise have to be included in the applications themselves. During the last decade the early versions of RDBMSs have evolved into mature database server technology capable of supporting distributed applications and operating in heterogeneous environments. As a result of these developments, the present generation of database technology addresses the requirements of most business-style applications.

Advanced database technology, however, is in a sense ambivalent. While it provides new and much-needed solutions in many important areas, these same solutions often require thorough consideration in order to avoid introducing new problems. One such area is database security. In this paper we consider three prominent areas of non-standard database technology: object-oriented, active (DBMSs with triggers), and federated (a type of meta-database management system) database management systems. In particular, we show which typical security problems (with a focus on access control) have to be solved for these systems. We briefly review the main achievements of database research and development in the following sections.
All of these databases can take advantage of indexing to increase their speed. Indexing technology has advanced tremendously since its early uses in the 1960s and 1970s. The most common kind of index uses a sorted list of the contents of a particular table column, with pointers to the rows associated with each value. An index allows the set of table rows matching some criterion to be located quickly. Indexes are typically stored in data structures such as B-trees, hash tables, and linked lists. Usually, a database designer selects the specific technique best suited to the type of index required.
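The "sorted list of column values with row pointers" idea can be sketched in a few lines of Python. This is a deliberately simplified toy, not how a production engine stores indexes (those use B-trees on disk), but it shows why a sorted structure turns a full scan into a binary search.

```python
import bisect

# A toy table: a list of rows keyed by an "id" column.
table = [
    {"id": 7, "city": "Oslo"},
    {"id": 3, "city": "Lima"},
    {"id": 9, "city": "Kyoto"},
    {"id": 1, "city": "Cairo"},
]

# Build the index: (key, row position) pairs sorted by key.
index = sorted((row["id"], pos) for pos, row in enumerate(table))
keys = [k for k, _ in index]

def lookup(key):
    """Binary-search the sorted keys instead of scanning every row."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return table[index[i][1]]   # follow the "pointer" to the row
    return None

print(lookup(9))  # {'id': 9, 'city': 'Kyoto'}
print(lookup(4))  # None
```

The lookup cost drops from O(n) to O(log n), at the price of keeping `index` in sync on every insert or update, which is exactly the trade-off discussed below.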
Most relational DBMSs and some object DBMSs have the advantage that indexes can be created or dropped without changing the applications that make use of them; the database chooses among many different strategies based on which one it estimates will run fastest. In other words, indexes act transparently to the application or end user querying the database: while they affect performance, any SQL command will produce the same result with or without them. The RDBMS produces a query plan describing how to execute the query, typically generated by estimating the costs of the candidate algorithms and selecting the cheapest.
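This transparency is easy to observe in SQLite, whose `EXPLAIN QUERY PLAN` statement reports the optimizer's chosen strategy (the exact wording of the plan text varies between SQLite versions; the table and index names below are illustrative).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, str(i)) for i in range(1000)])

def plan(sql):
    # Column 3 of each EXPLAIN QUERY PLAN row is the human-readable detail.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT b FROM t WHERE a = 500"
before = plan(query)                       # no index: a full table scan
conn.execute("CREATE INDEX idx_a ON t(a)")
after = plan(query)                        # same SQL, now answered via the index
print(before, after)
```

The application's SQL is unchanged; only the plan, and hence the running time, differs.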
An index speeds up access to data, but it has disadvantages as well. First, every index increases the amount of storage required on the hard drive in addition to that of the database file itself, and second, the index must be updated each time the data are altered, which costs time. (Thus an index saves time when reading data but costs time when entering and altering data. Whether an index is a net gain in efficiency therefore depends on how the data are used.)
A special case of an index is a primary index based on a primary key: a primary index must ensure a unique reference to a record. Often, for this purpose one simply uses a running index-number (ID number). Primary indexes play a significant role in relational databases, and they can speed up access to data considerably.
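Both properties of a primary index, uniqueness and the running ID number, can be demonstrated with SQLite, where an `INTEGER PRIMARY KEY` column doubles as the auto-assigned row identifier (the table below is hypothetical).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, owner TEXT)")
conn.execute("INSERT INTO account VALUES (1, 'Alice')")

# The primary index guarantees a unique reference to a record:
# a second row with id 1 is rejected.
try:
    conn.execute("INSERT INTO account VALUES (1, 'Bob')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

# Omitting the id lets SQLite assign the next running number itself.
conn.execute("INSERT INTO account (owner) VALUES ('Bob')")
new_id = conn.execute(
    "SELECT id FROM account WHERE owner = 'Bob'").fetchone()[0]
print(duplicate_allowed, new_id)  # False 2
```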
In any relational database system, query performance depends not only on the database structure but also on the way the query is optimized. We show various classes of syntactically equivalent SQL queries, each of which can exhibit significant differences in data access depending on the features of the query formulation and the success of the database query optimizer.
Similar-looking queries can take widely different times to execute. We conclude that on-line analytical processing systems must not depend on dynamic, user-specified SQL queries if consistent overall system performance is required: if SQL queries are constructed dynamically from user input, system designers will not be able to guarantee performance. Faster servers have proved to be a small factor compared with the speed of the algorithm used; the solution therefore lies in optimization.
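As a sketch of what "syntactically different but equivalent" means, the following pair of hypothetical queries, an `IN` subquery versus an explicit join, always return the same answer, yet an optimizer may or may not rewrite one into the other, so their access plans (and run times) can differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (cust_id  INTEGER PRIMARY KEY, region  TEXT);
    CREATE TABLE orders    (order_id INTEGER PRIMARY KEY, cust_id INTEGER);
""")
# 100 customers, odd ids in the east; 1000 orders spread evenly over them.
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, "east" if i % 2 else "west") for i in range(1, 101)])
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, (i % 100) + 1) for i in range(1, 1001)])

# Two formulations of the same question.
q_subquery = """SELECT COUNT(*) FROM orders
                WHERE cust_id IN (SELECT cust_id FROM customers
                                  WHERE region = 'east')"""
q_join = """SELECT COUNT(*) FROM orders o
            JOIN customers c ON o.cust_id = c.cust_id
            WHERE c.region = 'east'"""

a = conn.execute(q_subquery).fetchone()[0]
b = conn.execute(q_join).fetchone()[0]
print(a, b)  # identical answers, potentially different plans
```

Equivalent results do not imply equivalent cost, which is exactly why hand-formulated dynamic SQL makes performance hard to guarantee.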
SQL is a declarative language that does not specify the implementation details of database operations and relies on query optimization to determine an efficient data access plan. Early versions of relational DBMSs were often criticized for poor performance. Advanced query optimization techniques based on extensive research, combined with hardware advances, make relational DBMSs the fastest available database technology today, suitable for operation in environments where a high level of performance is mandatory. More recently, query optimization techniques have been developed for distributed databases, and effective solutions exist today for running queries across multiple databases. Further work on query optimization is likely to focus on producing techniques capable of taking full advantage of multiprocessor computer architectures.
Concurrent execution of user programs is essential for good DBMS performance. Because disk accesses are frequent and relatively slow, it is important to keep the CPU busy by working on several user programs concurrently. A user's program may carry out many operations on the data retrieved from the database, but the DBMS is only concerned with what data is read from and written to the database. A transaction is the DBMS's abstract view of a user program: a sequence of reads and writes. Users submit transactions and can think of each transaction as executing by itself; concurrency is achieved by the DBMS, which interleaves the actions (reads and writes of database objects) of the various transactions.

Each transaction must leave the database in a consistent state if the database was consistent when the transaction began. The DBMS will enforce some integrity constraints (ICs), depending on the ICs declared in CREATE TABLE statements; beyond this, the DBMS does not really understand the semantics of the data (e.g., it does not understand how the interest on a bank account is computed). A transaction might commit after completing all its actions, or it could abort (or be aborted by the DBMS) after executing some of them. A very important property guaranteed by the DBMS for all transactions is that they are atomic. Write-ahead logging (WAL) is used to undo the actions of aborted transactions and to restore the system to a consistent state after a crash. The two key problems associated with transaction management, concurrency and recovery, have been effectively solved, resulting in reliable and fast database technology capable of supporting large numbers of concurrent users.
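Atomicity, commit, and abort can be demonstrated with a classic bank-transfer sketch in SQLite (the schema and the insufficient-funds rule are illustrative assumptions, not part of any real banking system): either both writes of the transfer happen, or neither does.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 50)")

def transfer(src, dst, amount):
    """One transaction: debit src and credit dst atomically."""
    conn.execute("BEGIN")
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        if conn.execute("SELECT balance FROM account WHERE id = ?",
                        (src,)).fetchone()[0] < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")   # abort: the partial debit is undone
        raise

transfer(1, 2, 30)            # commits: balances become 70 / 80
try:
    transfer(1, 2, 1000)      # aborts: balances stay unchanged
except ValueError:
    pass
balances = [r[0] for r in conn.execute(
    "SELECT balance FROM account ORDER BY id")]
print(balances)  # [70, 80]
```

Internally, the engine's log (WAL or rollback journal) is what makes the `ROLLBACK`, and recovery after a crash, possible.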
Most DBMSs today use row-level (record-level) locking, group commits, and other techniques resulting in high overall transaction rates. As with query optimization, transaction management techniques have been extended to handle transactions spanning multiple database sites. Reliable recovery in distributed database environments is implemented using the two-phase commit (2PC) protocol, which maintains database consistency following failures during distributed update operations.
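The control flow of 2PC can be sketched in a few lines. This is a hypothetical in-memory model (no network, no persistent log, no timeout handling), intended only to show the two phases: the coordinator asks every participant to prepare and vote, and only if all vote yes does it send the global commit; otherwise it aborts those that had prepared.

```python
class Participant:
    """A site taking part in a distributed transaction."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit
        self.state = "active"

    def prepare(self):                    # phase 1: vote yes/no
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):                     # phase 2: global decision
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    if all(p.prepare() for p in participants):
        for p in participants:            # unanimous yes: commit everywhere
            p.commit()
        return "committed"
    for p in participants:                # any no: undo prepared sites
        if p.state == "prepared":
            p.abort()
    return "aborted"

ok = two_phase_commit([Participant("A"), Participant("B")])
bad = two_phase_commit([Participant("A"), Participant("B", can_commit=False)])
print(ok, bad)  # committed aborted
```

A real implementation additionally force-writes log records before each message, which is what lets sites recover their decision after a failure.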
Recent developments in 3D technologies and measurement instrumentation, combined with multimedia databases, offer new possibilities for the integrated and complete description of cultural heritage objects. A first attempt has been made to develop a database for archaeological ceramic artifacts in which, in addition to the digitized 2D and 3D images, description, typological characteristics, and historical information for each artifact, point-wise data are also included. As a first example, physicochemical data are mapped onto the surface of the 3D digital image of the object. The researcher thus has at his or her disposal the entire body of information regarding a specific artifact. This information contributes significantly to comparative study, provenance, determination of weathering, authentication and detection of forgery, inspection of past restorations, and so on.

Multimedia database management systems (MMDBMSs) must support multimedia data types in addition to providing facilities for traditional DBMS functions such as database creation, data modeling, data retrieval, data access and organization, and data independence. The area and its applications have experienced tremendous growth; with the rapid development of network technology in particular, multimedia database systems have developed substantially and multimedia information exchange has become very important. This paper reviews the history and the current state of the art in MMDBMSs.
