Wednesday 26 June 2013

System Security


System Security

The protection of computer-based resources (hardware, software, data, procedures, and people) against unauthorized use or natural disaster is known as System Security.
System Security can be divided into four related issues:
  • Security
  • Integrity
  • Privacy
  • Confidentiality

SYSTEM SECURITY refers to the technical innovations and procedures applied to the hardware and operating systems to protect against deliberate or accidental damage from a defined threat.

DATA SECURITY is the protection of data from loss, disclosure, modification and destruction.

SYSTEM INTEGRITY refers to the proper functioning of hardware and programs, appropriate physical security, and safety against external threats such as eavesdropping and wiretapping.

PRIVACY defines the rights of users or organizations to determine what information they are willing to share with or accept from others, and how the organization can be protected against unwelcome, unfair, or excessive dissemination of information about it.

CONFIDENTIALITY is a special status given to sensitive information in a database to minimize the possible invasion of privacy. It is an attribute of information that characterizes its need for protection.

SECURITY IN SOFTWARE:

System security refers to various validations on data, in the form of checks and controls, to prevent the system from failing. It is always important to ensure that only valid data is entered and only valid operations are performed on the system. The system employs two types of checks and controls:

CLIENT-SIDE VALIDATION:

Various client-side validations are used to ensure that only valid data is entered on the client side. Client-side validation saves server time and load by not passing invalid data to the server. Some of the checks imposed are:

• JavaScript is used to ensure that required fields are filled with suitable data only. Maximum lengths of the form fields are appropriately defined.
• Forms cannot be submitted without the mandatory fields filled in, so manual mistakes such as submitting empty mandatory fields are caught at the client side, saving server time and load.
• Tab indexes are set according to need, taking into account the user's ease while working with the system.


SERVER-SIDE VALIDATION:

Some checks cannot be applied on the client side. Server-side checks are necessary to keep the system from failing and to intimate the user that an invalid operation has been performed or that the operation performed is restricted. Some of the server-side checks imposed are:

• Server-side constraints have been imposed to check the validity of primary and foreign keys. A primary key value cannot be duplicated: any attempt to duplicate a primary key value results in a message intimating the user, and forms that use a foreign key can be updated only with existing foreign key values (a sketch follows this list).

• The user is intimated through appropriate messages about successful operations or exceptions occurring on the server side.

• Various access control mechanisms have been built so that one user may not interfere with another. Access permissions for the various types of users are controlled according to the organizational structure. Only permitted users can log on to the system, and they have access according to their category. Usernames, passwords, and permissions are controlled on the server side.

• Using server-side validation, constraints on several restricted operations are imposed.
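
As a minimal sketch of such a server-side primary-key check (the Employees table, its columns, and the connection string are hypothetical placeholders):

using System;
using System.Data.SqlClient;

class PrimaryKeyCheck
{
    // Inserts an employee and intimates the user if the primary key is duplicated.
    static void InsertEmployee(string connectionString, int empId, string name)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Employees (EmpId, Name) VALUES (@id, @name)", conn))
        {
            cmd.Parameters.AddWithValue("@id", empId);
            cmd.Parameters.AddWithValue("@name", name);
            conn.Open();
            try
            {
                cmd.ExecuteNonQuery();
                Console.WriteLine("Record inserted successfully.");
            }
            catch (SqlException ex)
            {
                // SQL Server errors 2627/2601 indicate a primary key or unique index violation.
                if (ex.Number == 2627 || ex.Number == 2601)
                    Console.WriteLine("An employee with this id already exists.");
                else
                    throw;
            }
        }
    }
}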


System Design and Normalization


System Design:

Software design sits at the technical kernel of the software engineering process and is applied regardless of the development paradigm and area of application. Design is the first step in the development phase for any engineered product or system. The designer’s goal is to produce a model or representation of an entity that will later be built. Once system requirements have been specified and analyzed, system design is the first of the three technical activities (design, code, and test) required to build and verify software.

The importance of design can be stated with a single word: quality. Design is the place where quality is fostered in software development. Design provides us with representations of software that can be assessed for quality. Design is the only way that we can accurately translate a customer’s view into a finished software product or system. Software design serves as a foundation for all the software engineering steps that follow. Without a strong design we risk building an unstable system: one that will be difficult to test and whose quality cannot be assessed until the last stage.

During design, progressive refinements of data structure, program structure, and procedural detail are developed, reviewed, and documented. System design can be viewed from either a technical or a project management perspective. From the technical point of view, design comprises four activities: architectural design, data structure design, interface design, and procedural design.

Normalization:

Normalization is the process of converting a relation to a standard form.  The process is used to handle the problems that can arise due to data redundancy, i.e. repetition of data in the database, to maintain data integrity, and to handle problems that can arise from insertion, update, and deletion anomalies.

Decomposition is the process of splitting relations into multiple relations to eliminate anomalies and maintain data integrity.  To do this we use normal forms, or rules for structuring relations.

Insertion anomaly: Inability to add data to the database due to absence of other data.
Deletion anomaly: Unintended loss of data due to deletion of other data.
Update anomaly: Data inconsistency resulting from data redundancy and partial update.
Normal Forms:  These are the rules for structuring relations that eliminate anomalies.

FIRST NORMAL FORM:

          A relation is said to be in first normal form if the values in the relation are atomic for every attribute in the relation.  By this we mean simply that no attribute value can be a set of values or, as it is sometimes expressed, a repeating group.

SECOND NORMAL FORM:

          A relation is said to be in second normal form if it is in first normal form and it satisfies any one of the following rules:
1)   The primary key is not a composite key.
2)   No non-key attributes are present.
3)   Every non-key attribute is fully functionally dependent on the full set of primary key attributes.

THIRD NORMAL FORM:
A relation is said to be in third normal form if there exist no transitive dependencies.

Transitive Dependency:  If a non-key attribute depends on another non-key attribute, which in turn depends on the primary key, then it is said to be transitively dependent on the key. For example, in a relation Orders(OrderId, CustomerId, CustomerName), CustomerName depends on CustomerId, which in turn depends on OrderId.
          The above normalization principles were applied to decompose the data into multiple tables, thereby allowing the data to be maintained in a consistent state. A sketch of such a decomposition follows.
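
As a minimal sketch of such a decomposition, continuing the hypothetical Orders/Customers example from the transitive-dependency definition above, the decomposed relations could be created as follows:

using System.Data.SqlClient;

class NormalizationExample
{
    // Creates the decomposed relations; conn is an already-open SqlConnection.
    static void CreateTables(SqlConnection conn)
    {
        // Before: Orders(OrderId, CustomerId, CustomerName) is not in 3NF,
        // because OrderId -> CustomerId and CustomerId -> CustomerName.
        // After: splitting the relation removes the transitive dependency.
        string ddl =
            @"CREATE TABLE Customers (
                  CustomerId   INT PRIMARY KEY,
                  CustomerName VARCHAR(100) NOT NULL);
              CREATE TABLE Orders (
                  OrderId      INT PRIMARY KEY,
                  CustomerId   INT NOT NULL
                      REFERENCES Customers (CustomerId));";
        using (var cmd = new SqlCommand(ddl, conn))
        {
            cmd.ExecuteNonQuery();
        }
    }
}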


SQL Server


SQL SERVER

       A database management system, or DBMS, gives the user access to their data and helps them transform the data into information. Such database management systems include dBase, Paradox, IMS, and SQL Server.  These systems allow users to create, update, and extract information from their databases.
          A database is a structured collection of data.  Data refers to the characteristics of people, things and events.  SQL Server stores each data item in its own field.  In SQL Server, the fields relating to a particular person, thing or event are bundled together to form a single complete unit of data, called a record (it can also be referred to as a row or an occurrence).  Each record is made up of a number of fields.  No two fields in a record can have the same field name.
          During an SQL Server Database design project, the analysis of your business needs identifies all the fields or attributes of interest.  If your business needs change over time, you define any additional fields or change the definition of existing fields.

SQL SERVER TABLES
          SQL Server stores records relating to each other in a table.  Different tables are created for the various groups of information. Related tables are grouped together to form a database.

PRIMARY KEY
          Every table in SQL Server has a field or a combination of fields that uniquely identifies each record in the table.  The unique identifier is called the Primary Key, or simply the Key.  The primary key provides the means to distinguish one record from all others in a table.  It allows the user and the database system to identify, locate and refer to one particular record in the database.

RELATIONAL DATABASE

          Sometimes all the information of interest to a business operation can be stored in one table.  SQL Server makes it very easy to link the data in multiple tables. Matching an employee to the department in which they work is one example.  This is what makes SQL Server a relational database management system, or RDBMS.  It stores data in two or more tables and enables you to define relationships between the tables.


FOREIGN KEY

          When a field in one table matches the primary key of another table, that field is referred to as a foreign key.  A foreign key is a field or a group of fields in one table whose values match those of the primary key of another table.


REFERENTIAL INTEGRITY

          Not only does SQL Server allow you to link multiple tables, it also maintains consistency between them.  Ensuring that the data among related tables is correctly matched is referred to as maintaining referential integrity.
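
As a minimal sketch of referential integrity being enforced (reusing the hypothetical Customers/Orders tables from the normalization example; the connection string is a placeholder):

using System;
using System.Data.SqlClient;

class ReferentialIntegrityDemo
{
    static void Main()
    {
        // Placeholder connection string; adjust server and database names.
        using (var conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
        {
            conn.Open();
            // Customer 999 does not exist, so SQL Server rejects the insert.
            using (var cmd = new SqlCommand(
                "INSERT INTO Orders (OrderId, CustomerId) VALUES (1, 999)", conn))
            {
                try
                {
                    cmd.ExecuteNonQuery();
                }
                catch (SqlException ex)
                {
                    // Error 547: the statement conflicted with a FOREIGN KEY constraint.
                    if (ex.Number == 547)
                        Console.WriteLine("Referential integrity violation: no such customer.");
                    else
                        throw;
                }
            }
        }
    }
}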

DATA ABSTRACTION

          A major purpose of a database system is to provide users with an abstract view of the data.  The system hides certain details of how the data is stored and maintained. Data abstraction is divided into three levels.
Physical level:  This is the lowest level of abstraction, at which one describes how the data are actually stored.
Conceptual level:  At this level of abstraction, one describes what data are actually stored and the entities and relationships among them.
View level:  This is the highest level of abstraction, at which one describes only part of the database.
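
The view level corresponds closely to SQL views. As a minimal sketch (the EmployeeDirectory view and the Employees table are hypothetical):

using System.Data.SqlClient;

class ViewLevelExample
{
    // Creates a view that exposes only part of the data; conn is an open connection.
    static void CreateDirectoryView(SqlConnection conn)
    {
        // Users of the view see names and departments, but not salaries,
        // and nothing about how the rows are physically stored.
        string sql = "CREATE VIEW EmployeeDirectory AS " +
                     "SELECT Name, Department FROM Employees";
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.ExecuteNonQuery();
        }
    }
}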

ADVANTAGES OF RDBMS

• Redundancy can be avoided
• Inconsistency can be eliminated
• Data can be shared
• Standards can be enforced
• Security restrictions can be applied
• Integrity can be maintained
• Conflicting requirements can be balanced
• Data independence can be achieved


DISADVANTAGES OF DBMS

          A significant disadvantage of the DBMS system is cost.  In addition to the cost of purchasing or developing the software, the hardware has to be upgraded to allow for the extensive programs and the workspace required for their execution and storage.  While centralization reduces duplication, the lack of duplication requires that the database be adequately backed up so that in case of failure the data can be recovered.

FEATURES OF SQL SERVER (RDBMS)

          SQL SERVER is one of the leading database management systems (DBMS) because it meets the uncompromising requirements of today’s most demanding information systems.  From complex decision support systems (DSS) to the most rigorous online transaction processing (OLTP) applications, even applications that require simultaneous DSS and OLTP access to the same critical data, SQL Server leads the industry in both performance and capability.

SQL SERVER is a truly portable, distributed, and open DBMS that delivers unmatched performance, continuous operation, and support for every database.
The SQL SERVER RDBMS is a high-performance, fault-tolerant DBMS specially designed for online transaction processing and for handling large database applications.

SQL SERVER with the transaction processing option offers features that contribute to a very high level of transaction processing throughput, such as the row-level lock manager.


ENTERPRISE WIDE DATA SHARING

          The unrivaled portability and connectivity of the SQL SERVER DBMS enables all the systems in the organization to be linked into a singular, integrated computing resource.

PORTABILITY

          SQL SERVER is fully portable to more than 80 distinct hardware and operating system platforms, including UNIX, MSDOS, OS/2, Macintosh, and dozens of proprietary platforms.  This portability gives complete freedom to choose the database server platform that meets the system requirements.

OPEN SYSTEMS

          SQL SERVER offers a leading implementation of industry-standard SQL.  SQL Server’s open architecture integrates SQL SERVER and non-SQL SERVER DBMSs with the industry’s most comprehensive collection of tools, applications, and third-party software products.  SQL Server’s open architecture provides transparent access to data from other relational databases and even non-relational databases.

DISTRIBUTED DATA SHARING

          SQL Server’s networking and distributed database capabilities to access data stored on remote server with the same ease as if the information was stored on a single local computer.  A single SQL statement can access data at multiple sites. You can store data where system requirements such as performance, security or availability dictate.


UNMATCHED PERFORMANCE

          The most advanced architecture in the industry allows the SQL SERVER DBMS to deliver unmatched performance.

SOPHISTICATED CONCURRENCY CONTROL

          Real-world applications demand access to critical data.  With most database systems, applications become “contention bound”, where performance is limited not by CPU power or by disk I/O, but by users waiting on one another for data access.  SQL Server employs full, unrestricted row-level locking and contention-free queries to minimize, and in many cases entirely eliminate, contention wait times.

NO I/O BOTTLENECKS

          SQL Server’s fast commit groups commit and deferred write technologies dramatically reduce disk I/O bottlenecks. While some database write whole data block to disk at commit time, SQL Server commits transactions with at most sequential log file on disk at commit time, On high throughput systems, one sequential writes typically group commit multiple transactions.  Data read by the transaction remains as shared memory so that other transactions may access that data without reading it again from disk.  Since fast commits write all data necessary to the recovery to the log file, modified blocks are written back to the database independently of the transaction commit, when written from memory to disk

ADO.NET Overview


ADO.NET OVERVIEW

ADO.NET is an evolution of the ADO data access model that directly addresses user requirements for developing scalable applications. It was designed specifically for the web with scalability, statelessness, and XML in mind.
ADO.NET uses some ADO objects, such as the Connection and Command objects, and also introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and DataAdapter.

The important distinction between this evolved stage of ADO.NET and previous data architectures is that there exists an object -- the DataSet -- that is separate and distinct from any data stores. Because of that, the DataSet functions as a standalone entity. You can think of the DataSet as an always disconnected recordset that knows nothing about the source or destination of the data it contains. Inside a DataSet, much like in a database, there are tables, columns, relationships, constraints, views, and so forth.

A DataAdapter is the object that connects to the database to fill the DataSet. Then, it connects back to the database to update the data there, based on operations performed while the DataSet held the data. In the past, data processing has been primarily connection-based. Now, in an effort to make multi-tiered apps more efficient, data processing is turning to a message-based approach that revolves around chunks of information. At the center of this approach is the DataAdapter, which provides a bridge to retrieve and save data between a DataSet and its source data store. It accomplishes this by means of requests to the appropriate SQL commands made against the data store.
The XML-based DataSet object provides a consistent programming model that works with all models of data storage: flat, relational, and hierarchical. It does this by having no 'knowledge' of the source of its data, and by representing the data that it holds as collections and data types. No matter what the source of the data within the DataSet is, it is manipulated through the same set of standard APIs exposed through the DataSet and its subordinate objects.
While the DataSet has no knowledge of the source of its data, the managed provider has detailed and specific information. The role of the managed provider is to connect, fill, and persist the DataSet to and from data stores. The OLE DB and SQL Server .NET Data Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .NET Framework provide four basic objects: the Command, Connection, DataReader, and DataAdapter. In the remaining sections of this document, we'll walk through each part of the DataSet and the OLE DB/SQL Server .NET Data Providers, explaining what they are and how to program against them.
The following sections will introduce you to some objects that have evolved, and some that are new. These objects are:

• Connections. For connecting to and managing transactions against a database.
• Commands. For issuing SQL commands against a database.
• DataReaders. For reading a forward-only stream of data records from a SQL Server data source.
• DataSets. For storing, remoting, and programming against flat data, XML data, and relational data.
• DataAdapters. For pushing data into a DataSet and reconciling data against a database.
When dealing with connections to a database, there are two different options: the SQL Server .NET Data Provider (System.Data.SqlClient) and the OLE DB .NET Data Provider (System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider, which is written to talk directly to Microsoft SQL Server. The OLE DB .NET Data Provider is used to talk to any OLE DB provider (as it uses OLE DB underneath).

Connections:
Connections are used to 'talk to' databases, and are represented by provider-specific classes such as SqlConnection. Commands travel over connections and resultsets are returned in the form of streams which can be read by a DataReader object, or pushed into a DataSet object.
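
As a minimal sketch of opening a connection with the SQL Server .NET Data Provider (the connection string is a placeholder to be adjusted for the actual server and database):

using System;
using System.Data.SqlClient;

class ConnectionExample
{
    static void Main()
    {
        // Placeholder connection string; adjust server and database names.
        string connStr = "Server=.;Database=Northwind;Integrated Security=true";
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open(); // commands travel over this connection
            Console.WriteLine("Connection state: " + conn.State);
        } // Dispose() closes the connection even if an exception occurs
    }
}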

Commands:
Commands contain the information that is submitted to a database, and are represented by provider-specific classes such as SqlCommand. A command can be a stored procedure call, an UPDATE statement, or a statement that returns results. You can also use input and output parameters, and return values as part of your command syntax. The example below shows how to issue an INSERT statement against the Northwind database.
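
A sketch of such an INSERT, assuming the standard Northwind Customers table and a placeholder connection string, might look like this:

using System.Data.SqlClient;

class CommandExample
{
    static void Main()
    {
        // Placeholder connection string; adjust server and database names.
        using (var conn = new SqlConnection("Server=.;Database=Northwind;Integrated Security=true"))
        using (var cmd = new SqlCommand(
            "INSERT INTO Customers (CustomerID, CompanyName) VALUES (@id, @name)", conn))
        {
            // Parameters handle quoting and guard against SQL injection.
            cmd.Parameters.AddWithValue("@id", "ZZZ01");
            cmd.Parameters.AddWithValue("@name", "New Company Inc.");
            conn.Open();
            int rows = cmd.ExecuteNonQuery(); // returns the number of rows affected
        }
    }
}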

DataReaders:

The DataReader object is somewhat synonymous with a read-only/forward-only cursor over data. The DataReader API supports flat as well as hierarchical data. A DataReader object is returned after executing a command against a database. The format of the returned DataReader object is different from a recordset. For example, you might use the DataReader to show the results of a search list in a web page.
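
As a minimal sketch of reading rows with a DataReader (again assuming the standard Northwind schema and a placeholder connection string):

using System;
using System.Data.SqlClient;

class DataReaderExample
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=Northwind;Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT CustomerID, CompanyName FROM Customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read()) // forward-only: each row is visited once
                {
                    Console.WriteLine("{0}: {1}", reader["CustomerID"], reader["CompanyName"]);
                }
            }
        }
    }
}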

DATASETS AND DATAADAPTERS:

DataSets
The DataSet object is similar to the ADO Recordset object, but more powerful, and with one other important distinction: the DataSet is always disconnected. The DataSet object represents a cache of data, with database-like structures such as tables, columns, relationships, and constraints. However, though a DataSet can and does behave much like a database, it is important to remember that DataSet objects do not interact directly with databases, or other source data. This allows the developer to work with a programming model that is always consistent, regardless of where the source data resides. Data coming from a database, an XML file, from code, or user input can all be placed into DataSet objects. Then, as changes are made to the DataSet they can be tracked and verified before updating the source data. The GetChanges method of the DataSet object actually creates a second DataSet that contains only the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update the original data source.
The DataSet has many XML characteristics, including the ability to produce and consume XML data and XML schemas. XML schemas can be used to describe schemas interchanged via WebServices. In fact, a DataSet with a schema can actually be compiled for type safety and statement completion.
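
As a minimal sketch of working with the disconnected DataSet and GetChanges (this assumes a DataSet that was already filled elsewhere, for instance by the DataAdapter example in the next section):

using System.Data;

class DataSetExample
{
    // Returns a second DataSet holding only the changed rows.
    static DataSet ExtractChanges(DataSet ds)
    {
        // Modify a row in the cached table; no database round trip occurs here.
        DataTable customers = ds.Tables["Customers"];
        customers.Rows[0]["CompanyName"] = "Renamed Company";

        // GetChanges creates a DataSet containing only the modified rows,
        // which a DataAdapter (or other object) can use to update the source.
        return ds.GetChanges(DataRowState.Modified);
    }
}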

DATA ADAPTERS (OLEDB/SQL)

The DataAdapter object works as a bridge between the DataSet and the source data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection) can increase overall performance when working with Microsoft SQL Server databases. For other OLE DB-supported databases, you would use the OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source after changes have been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command; using the Update method calls the INSERT, UPDATE or DELETE command for each changed row. You can explicitly set these commands in order to control the statements used at runtime to resolve changes, including the use of stored procedures. For ad-hoc scenarios, a CommandBuilder object can generate these at run-time based upon a select statement. However, this run-time generation requires an extra round-trip to the server in order to gather required metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at design time will result in better run-time performance.
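
As a minimal sketch of filling a DataSet and resolving changes with a SqlDataAdapter, here using a run-time CommandBuilder for brevity (the connection string is a placeholder; explicitly provided commands would perform better, as noted above):

using System.Data;
using System.Data.SqlClient;

class DataAdapterExample
{
    static void Main()
    {
        // Placeholder connection string; adjust server and database names.
        string connStr = "Server=.;Database=Northwind;Integrated Security=true";
        using (var adapter = new SqlDataAdapter("SELECT CustomerID, CompanyName FROM Customers", connStr))
        using (var builder = new SqlCommandBuilder(adapter)) // generates INSERT/UPDATE/DELETE at run time
        {
            var ds = new DataSet();
            adapter.Fill(ds, "Customers");   // opens the connection, runs SELECT, closes again

            // Work on the disconnected cache.
            ds.Tables["Customers"].Rows[0]["CompanyName"] = "Renamed Company";

            adapter.Update(ds, "Customers"); // builder-generated UPDATE runs for the changed row
        }
    }
}
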
1.   ADO.NET is the next evolution of ADO for the .NET Framework.
2.   ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new objects, the DataSet and DataAdapter, are provided for these scenarios.
3.   ADO.NET can be used to get data from a stream, or to store data in a cache for updates.
4.   There is a lot more information about ADO.NET in the documentation.
5.   Remember, you can execute a command directly against the database in order to do inserts, updates, and deletes. You don't need to first put data into a DataSet in order to insert, update, or delete it.
Also, you can use a DataSet to bind to the data, move through the data, and navigate data relationships.

Software Requirement Specification


SOFTWARE REQUIREMENT SPECIFICATION

DEVELOPERS RESPONSIBILITIES OVERVIEW:

The developer is responsible for:
• Developing the system, which meets the SRS and solves all the requirements of the system.
• Demonstrating the system and installing it at the client's location after the acceptance testing is successful.
• Submitting the required user manual describing the system interfaces and how to work with them, and also the documents of the system.
• Conducting any user training that might be needed for using the system.
• Maintaining the system for a period of one year after installation.

 FUNCTIONAL REQUIREMENTS:

Following is a list of the functionalities of the system.
Secure, per-user access is made possible by prompting each user to enter his user-id and password before he can send or view his mails. The project has Inbox, Compose, and Outbox modules.

This system should provide the administrator with conveniences such as adding a new agent, viewing and managing the information about the agents, and viewing reports day-wise, weekly, or monthly:

• Employees view their mail and compose or send mail to others.
• The system has Inbox, Outbox, and Compose mail modules.

This system should help the users by providing details online, and it provides a facility to search the employee records based on various options, such as the type of user or the location of the employee. It should allow the users to set alert messages to the employees for changes in the rules, etc. The users should be able to send messages through mails. The users should be able to generate the following reports:
• Number of users processed in a specified interval of time.
• Day-wise reports for amounts and customer details, with reduced data storage.

This system should include support for the users to view their details, to view the information catalog, and to search for all available users.

     Non-Functional Requirements:
The system should be a web-based system. Each user should have a user account. The system should ask users for a username and password; it does not permit unregistered users to access the Integrated Claim Settlement Services. The system should have role-based access to system functions. An approval process has to be defined. The system should have modular customization components so that they can be reused across the implementation.

These are mainly the following:

  • Secure access of confidential data (user’s details). SSL (Secure Sockets Layer) can be used.
  • 24 X 7 availability
  • Better component design to get better performance at peak time
  • Flexible service based architecture will be highly desirable for future extension

1. Performance
We understand the importance of timing, of getting there before the competition.  A rich portfolio of reusable, modular frameworks helps jump-start projects.  A tried and tested methodology ensures that we follow a predictable, low-risk path to achieve results.  Our track record is testimony to complex projects delivered within, and even before, schedule.

  2. Security
The system provides security by requiring a username and password.

  3. Safety
This application provides safety to the users when accessing the databases and when performing operations on the databases.

   4. Interfaces
It provides the interface for accessing the database and also allows the user to perform manipulations on the databases.

  5. Reliability
This entire project depends on SQL Server.

6. Accuracy
Since the same table is created in different users' accounts, the possibility of retrieving data wrongly increases. Also, if the volume of data is large, validations become difficult. This may result in loss of accuracy of data.

  7. Ease of Use
Every user should be comfortable working with a computer and browsing the internet. He must have a basic knowledge of English.

8. Interoperability
This provides import and export facilities for moving data from one database to another.

9. Maintainability
The key to reducing the need for maintenance is, where possible, to do the following essential tasks:
1.   More accurately defining user requirements during system development.
2.   Assembling better systems documentation.
3.   Using more effective methods for designing, processing, logging, and communicating information with project team members.
4.   Making better use of existing tools and techniques.
5.   Managing the system engineering process effectively.

10. Testability
   Testing is done in various ways, such as testing the algorithm and the program code; debugging with sample data is also part of the above testing.

   11. Design Constraints
During system testing the system is used experimentally to ensure that the software does not fail, i.e., that it will run according to its specification and in the way the users expect.  Special test data are input for processing and the results examined.  A limited number of users may be allowed to use the system to see whether they try to use it in unforeseen ways.  It is preferable to discover any surprises before the organization implements the system.

12. Cost Estimates

Cost estimates are prepared at increasing levels of detail, as described in the following three sections.

13. Preliminary Estimates.
  The project is decomposed into major structural systems or production equipment items, e.g. the entire floor of a building or a cooling system for a processing plant.

  14. Detailed Estimates.
 The project is decomposed into components of various major systems, i.e., a single floor panel for a building or a heat exchanger for a cooling system.

 15. Engineer's Estimates.
 The project is decomposed into detailed items of various components as warranted by the available cost data. Examples of detailed items are slabs and beams in a floor panel, or the piping and connections for a heat exchanger.

16. Development Platform
   The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet. The .NET Framework is designed to fulfill the following objectives:
  • To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
  • To provide a code-execution environment that minimizes software deployment and versioning conflicts.
  • To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.
  • To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.
  • To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.
  • To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.

17. Acceptance Criteria Procedures
         
       The “SQL Client” has been successfully completed. The goal of the system is achieved and problems are solved.
  • The project has been appreciated by all the users in the organization.
  • It is easy to use, since it uses the GUI provided in the user dialog.
  • User friendly screens are provided.
  • The usage of the software increases efficiency and decreases effort.
  • It has been efficiently employed by the administrator as a Remote Database Access Tool.
  • It has been thoroughly tested and implemented.