Sara Hussein Mohammed Faqeeh
Previously, monitoring and registration procedures were done manually, whereas the proposed site performs them automatically against a centralized database in which each class of record is stored. The manual process takes a lot of time and causes many errors. Further, to maintain records of items and facility availability for students, we are offering this proposal for a recording system used as software over the Internet. The project provides and checks all sorts of constraints, so that the user enters only useful data and validation is done in an effective way.
TABLE OF CONTENTS
Project Overview Statement
Project Goals and Objectives
Limitations and Constraints
PREVIOUS WORKS AND LITERATURE REVIEW
Purpose of the Feasibility Study
Justification for the Proposed System
Work Breakdown Structure
Activity and Task List
SRS – SYSTEM REQUIREMENTS SPECIFICATION
System Requirements Analysis
Hardware Requirements Specification
Software Requirements Specification
Other Requirements Specifications
Application Architecture Design
User Interface Design
REAL SCREEN SHOTS OF DATA ENTRY FORMS AND REPORTS
CODING OF IMPORTANT FUNCTIONS AND PROCEDURES OF THE SOFTWARE
Day after day the world is evolving faster, especially in the field of communication. Through these enterprise communications technologies, we will introduce the idea of sending short messages through a Web site.
Keywords: Sending SMS using .NET through a web service, Centralized Database, Web-Based Application.
This article is about sending SMS using .NET through a web service. With the help of this article we can easily understand how to use a web service in .NET, and you will also get some information about SMS, GPRS, etc. The web service will give a notification if it is unable to deliver a message.
Project Objective and Goals:
The main objective of the project is to create a Web site that enables its users to send SMS through web pages using Web techniques.
The aim is to make the short message service available, without the need for a mobile phone, to the largest possible number of users, with greater safety and privacy, through the site.
The scope of the project is not limited to a certain class of society or a specific target; it serves all strata of society. Availability of a mobile network is not required; what matters is the availability of an Internet service.
Tools/Technologies to be used in Project:
Our tools in the development of this site come from the Microsoft environment: web pages are designed in Microsoft Visual Studio 10.0, we take advantage of the GPRS (General Packet Radio Service) service, and the code is written in the C# programming language.
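As a rough sketch of the core idea, the site hands an SMS request to an HTTP gateway web service. The gateway URL and its "to"/"text" parameter names below are hypothetical placeholders, not a real provider's API:

```csharp
using System;

public class SmsSketch
{
    // Builds the request URI for a hypothetical HTTP SMS gateway,
    // percent-escaping the recipient number and the message text.
    public static string BuildSmsRequestUri(string baseUrl, string number, string message)
    {
        return string.Format("{0}?to={1}&text={2}",
            baseUrl,
            Uri.EscapeDataString(number),
            Uri.EscapeDataString(message));
    }

    public static void Main()
    {
        string uri = BuildSmsRequestUri("https://gateway.example.com/send",
                                        "+15551234567", "Hello from the site");
        Console.WriteLine(uri);
        // The page code-behind would then issue this request (for example with
        // System.Net.WebClient) and report a delivery failure to the user.
    }
}
```

In the real site this call would sit behind the web page's submit handler, so the user never deals with the gateway directly.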
The following are some constraints in the proposed system:
An Internet connection with an average speed is needed.
A login and password are used for identification of the user.
There is human dependence on data entry (wrong data entry may lead to serious consequences).
In the proposed system the following assumptions are made:
Administrator user is already created in the system.
Roles for users at different levels are predefined.
7. Feasibility Study
Once the problem is clearly understood, the next step is the feasibility study, which is a high-level capsule version of the entire system analysis and design process. The objective is to determine whether or not the proposed system is feasible. The three tests of feasibility below have been carried out.
7.1 Economic feasibility
Economic feasibility attempts to weigh the costs of developing and implementing the new system against the benefits that would accrue from having the new system in place. This feasibility study gives the top management the economic justification for the new system.
A simple economic analysis which gives the actual comparison of costs and benefits is much more meaningful in this case. In addition, it proves to be a useful point of reference against which to compare actual costs as the project progresses. There could be various types of intangible benefits on account of automation. These could include increased customer satisfaction, improvement in product quality, better decision making, timeliness of information, expedited activities, improved accuracy of operations, better documentation and record keeping, faster retrieval of information, and better employee morale.
7.2 Operational Feasibility
A proposed project is beneficial only if it can be turned into an information system that will meet the organization's operating requirements. Simply stated, this test of feasibility asks whether the system will work when it is developed and installed, and whether there are major barriers to implementation. Here are questions that will help test the operational feasibility of a project:
Is there sufficient support for the project from management and from users? If the current system is well liked and used to the extent that people will not see reasons for change, there may be resistance.
Are the current business methods acceptable to the users? If they are not, users may welcome a change that will bring about a more operational and useful system.
7.3 Technical Feasibility
Evaluating the technical feasibility is the trickiest part of a feasibility study. This is because, at this point in time, not much detailed design of the system has been done, making it difficult to assess issues like performance, costs (on account of the kind of technology to be deployed), and so on. A number of issues have to be considered while doing a technical analysis.
Understand the different technologies involved in the proposed system: before commencing the project, we have to be very clear about which technologies are required for the development of the new system. Then find out whether the organization currently possesses the required technologies: is the required technology available within the organization?
8. PROJECT PLAN
A project plan, according to the Project Management Body of Knowledge (PMBOK), is: "...a formal, approved document used to guide both project execution and project control. The primary uses of the project plan are to document planning assumptions and decisions, facilitate communication among project stakeholders, and document approved scope, cost, and schedule baselines. A project plan may be summarized or detailed."
The latest edition of the PMBOK (v5) uses the term Project Charter to refer to the contract or document that the project sponsor and project manager use to agree on the initial vision of the project (scope, baseline, resources, objectives...) at a high level. The project management plan is the document that the project manager builds to describe in more detail the planning of the project and its organization. In the PMI methodology described in the PMBOK v5, the project charter and the project management plan are the two most important documents for describing a project during the initiation and planning phases.
8.1 WORK BREAKDOWN STRUCTURE (WBS)
A work breakdown structure is a key project deliverable that organizes the team's work into manageable sections. The Project Management Body of Knowledge (PMBOK) defines the work breakdown structure as a "deliverable oriented hierarchical decomposition of the work to be executed by the project team." The work breakdown structure visually defines the scope into manageable chunks that a project team can understand, as each level of the work breakdown structure provides further definition and detail. Figure 1 (below) depicts a sample work breakdown structure with three levels defined.
The project team creates the project work breakdown structure by identifying the major functional deliverables and subdividing those deliverables into smaller systems and sub-deliverables. These sub-deliverables are further decomposed until a single person can be assigned. At this level, the specific work packages required to produce the sub-deliverable are identified and grouped together. The work package represents the list of tasks or "to-dos" to produce the specific unit of work. If you've seen detailed project schedules, then you'll recognize the tasks under the work package as the "stuff" people need to complete by a specific time and within a specific level of effort.
From a cost perspective, these work packages are usually grouped and assigned to a specific department to produce the work. These departments, or cost accounts, are defined in an organizational breakdown structure and are allocated a budget to produce the specific deliverables. By integrating the cost accounts from the organizational breakdown structure and the project's work breakdown structure, the entire organization can track financial progress in addition to project performance.
8.2 GANTT CHART:
8.3 ACTIVITY AND TASK LIST:
Sending SMS using .NET through a Web service
Study & Training
8.4 Why Use a Work Breakdown Structure?
The work breakdown structure has a number of benefits in addition to defining and organizing the project work. A project budget can be allocated to the top levels of the work breakdown structure, and department budgets can be quickly calculated based on each project's work breakdown structure. By allocating time and cost estimates to specific sections of the work breakdown structure, a project schedule and budget can be quickly developed. As the project executes, specific sections of the work breakdown structure can be tracked to identify project cost performance and identify issues and problem areas in the project organization. For more information about time allocation, see the 100% Rule.
Project work breakdown structures can also be used to identify potential risks in a given project. If a work breakdown structure has a branch that is not well defined then it represents a scope definition risk. These risks should be tracked in a project log and reviewed as the project executes. By integrating the work breakdown structure with an organizational breakdown structure, the project manager can also identify communication points and formulate a communication plan across the project organization.
When a project is falling behind, referring to the work breakdown structure will quickly identify the major deliverables impacted by a failing work package or late sub-deliverable. The work breakdown structure can also be color coded to represent sub-deliverable status. Assigning colors of red for late, yellow for at risk, green for on-target, and blue for completed deliverables is an effective way to produce a heat-map of project progress and draw management's attention to key areas of the work breakdown structure.
Work Breakdown Structure Guidelines
The following guidelines should be considered when creating a work breakdown structure:
The top level represents the final deliverable or project
Sub-deliverables contain work packages that are assigned to an organization’s department or unit
All elements of the work breakdown structure don’t need to be defined to the same level
The work package defines the work, duration, and costs for the tasks required to produce the sub-deliverable
Work packages should not exceed 10 days of duration
Work packages should be independent of other work packages in the work breakdown structure
Work packages are unique and should not be duplicated across the work breakdown structure
Tools to Create a Work Breakdown Structure
Creating a Work Breakdown Structure is a team effort and is the culmination of multiple inputs and perspectives for the given project. One effective technique is to organize a brainstorming session with the various departments that will be involved with the project. Project teams can use low-technology tools like a white board, note cards, or sticky note pads to identify major deliverables, sub-deliverables, and specific work packages. These cards can be taped to a wall and reorganized as the team discusses the major deliverables and work packages involved in the project.
The low-technology approach is easy to do; however, it does not work well with distributed teams or translate easily into an electronic format. There are several tools available that support mind mapping, brainstorming, and work breakdown structures. MatchWare MindView is an easy-to-use mind mapping software package that supports work breakdown structures, project outlines, and Gantt charts, and exports easily into Microsoft Project for further schedule definition. Figure 3 provides an example of a work breakdown structure using MatchWare MindView.
9. SRS – SYSTEM REQUIREMENT SPECIFICATION
A System Requirements Specification (abbreviated SyRS when it needs to be distinct from a Software Requirements Specification, SRS) is a structured collection of information that embodies the requirements of a system.
A business analyst, sometimes titled system analyst, is responsible for analyzing the business needs of their clients and stakeholders to help identify business problems and propose solutions. Within the systems development life cycle domain, the BA typically performs a liaison function between the business side of an enterprise and the information technology department or external service providers.
9.1 SYSTEM REQUIREMENT ANALYSIS
Requirements analysis, in systems engineering and software engineering, encompasses those tasks that go into determining the needs or conditions to be met by a new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders, and analyzing, documenting, validating and managing software or system requirements.
Requirements analysis is critical to the success or failure of a systems or software project. The requirements should be documented, actionable, measurable, testable, traceable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design.
9.2 HARDWARE REQUIREMENT SPECIFICATION
HARDWARE REQUIREMENTS FOR DEVELOPMENT
Processor: Pentium 4 and above
Processor speed: 2.4 GHz and above
9.3 SOFTWARE REQUIREMENT SPECIFICATION
SOFTWARE REQUIREMENTS FOR DEVELOPMENT
ASP.NET (C#.NET) – for front-end design
MS SQL 2008 – for database tasks
– for testing web pages in various versions of IE
– platform for performing all these tasks
– for running the website on a system
Visual Studio 2010
HTML, an initialism of Hypertext Markup Language, is the predominant markup language for web pages. It provides a means to describe the structure of text-based information in a document — by denoting certain text as headings, paragraphs, lists, and so on — and to supplement that text with interactive forms, embedded images, and other objects. HTML is written in the form of labels (known as tags), surrounded by angle brackets. HTML can also describe, to some degree, the appearance and semantics of a document, and can include embedded scripting language code which can affect the behavior of web browsers and other HTML processors.
HTML is also often used to refer to content of the MIME type text/html or, even more broadly, as a generic term for HTML, whether in its XML-descended form (such as XHTML 1.0 and later) or its form descended directly from SGML.
Hyper Text Markup Language
Hypertext Markup Language (HTML), the language of the World Wide Web (WWW), allows users to produce Web pages that include text, graphics and pointers to other Web pages (hyperlinks).
HTML is not a programming language; it is an application of ISO Standard 8879, SGML (Standard Generalized Markup Language), specialized to hypertext and adapted to the Web. The idea behind hypertext is that instead of reading text in a rigid linear structure, we can easily jump from one point to another. We can navigate through the information based on our interest and preference. A markup language is simply a series of elements, each delimited with special characters, that define how text or other items enclosed within the elements should be displayed. Hyperlinks are underlined or emphasized words that link to other documents or to some portion of the same document.
HTML can be used to display any type of document on the host computer, which can be geographically at a different location. It is a versatile language and can be used on any platform or desktop.
HTML provides tags (special codes) to make the document look attractive. HTML tags are not case-sensitive. Using graphics, fonts, different sizes, color, etc., can enhance the presentation of the document. Anything that is not a tag is part of the document itself.
Basic HTML Tags:
<A>…</A>  Creates hypertext links
<B>…</B>  Formats text as bold
<BIG>…</BIG>  Formats text in large font
<BODY>…</BODY>  Contains all tags and text in the HTML document
<DD>  Definition of a term
<DL>…</DL>  Creates a definition list
<FONT>…</FONT>  Formats text with a particular font
<FORM>…</FORM>  Encloses a fill-out form
<FRAME>  Defines a particular frame in a set of frames
<H1>…</H6>  Creates headings of different levels (1 – 6)
<HEAD>…</HEAD>  Contains tags that specify information about a document
<HR>  Creates a horizontal rule
<HTML>…</HTML>  Contains all other HTML tags
<META>  Provides meta-information about a document
<SCRIPT>…</SCRIPT>  Contains client-side or server-side script
<TABLE>…</TABLE>  Creates a table
<TD>…</TD>  Indicates table data in a table
<TR>…</TR>  Designates a table row
<TH>…</TH>  Creates a heading in a table
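A minimal page assembled from several of these tags (the text content and file name are illustrative only):

```html
<HTML>
<HEAD>
<META name="description" content="A minimal example page">
</HEAD>
<BODY>
<H1>Example Heading</H1>
<B>Bold text</B> and a <A href="page2.html">hypertext link</A>.
<HR>
<TABLE>
<TR><TH>Header</TH></TR>
<TR><TD>Data</TD></TR>
</TABLE>
</BODY>
</HTML>
```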
The attributes of an element are name-value pairs, separated by "=", and written within the start tag of an element, after the element's name. The value should be enclosed in single or double quotes, although values consisting of certain characters can be left unquoted in HTML (but not XHTML). Leaving attribute values unquoted is considered unsafe.
Most elements take any of several common attributes: id, class, style and title. Most also take language-related attributes: lang and dir.
The id attribute provides a document-wide unique identifier for an element. This can be used by stylesheets to provide presentational properties, by browsers to focus attention on the specific element or by scripts to alter the contents or presentation of an element. The class attribute provides a way of classifying similar elements for presentation purposes. For example, an HTML document (or a set of documents) may use the designation class="notation" to indicate that all elements with this class value are all subordinate to the main text of the document (or documents). Such notation classes of elements might be gathered together and presented as footnotes on a page, rather than appearing in the place where they appear in the source HTML.
An author may use the style attribute to assign presentational properties to a particular element. It is considered better practice, though, to use an element’s id or class attributes to select the element from a stylesheet, although sometimes this can be too cumbersome for a simple ad hoc application of styled properties. The title attribute is used to attach a subtextual explanation to an element. In most browsers this title attribute is displayed as what is often referred to as a tooltip. The generic inline span element can be used to demonstrate these various attributes.
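A minimal illustration of these attributes on the generic inline span element (the class name, style value, and title wording are only examples):

```html
<span class="notation" style="color:blue" title="Hypertext Markup Language">HTML</span>
```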
The preceding displays as HTML (pointing the cursor at the abbreviation should display the title text in most browsers).
An HTML document is small and hence easy to send over the net. It is small because it does not include formatting information.
HTML is platform independent.
HTML tags are not case-sensitive.
Using a script embedded in the page, we can:
Validate the contents of a form and make calculations.
Add scrolling or changing messages to the Browser’s status line.
Animate images or rotate images that change when we move the mouse over them.
Detect the browser in use and display different content for different browsers.
Detect installed plug-ins and notify the user if a plug-in is required.
It is more flexible than VBScript.
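The first capability above can be sketched as a small validation function run before a form is submitted. The field names and the price calculation are hypothetical:

```javascript
// Validates hypothetical order fields and computes a total.
// Returns { ok, message } on failure and { ok, total } on success.
function validateOrder(form) {
  var qty = parseInt(form.quantity, 10);
  var price = parseFloat(form.unitPrice);
  if (isNaN(qty) || qty <= 0) {
    return { ok: false, message: "Quantity must be a positive number." };
  }
  if (isNaN(price) || price < 0) {
    return { ok: false, message: "Unit price must be zero or more." };
  }
  return { ok: true, total: qty * price };
}

// In a page this would run from the form's submit handler, with the input
// values read from the form's fields before calling validateOrder.
console.log(validateOrder({ quantity: "3", unitPrice: "2.50" }).total); // 7.5
```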
ASP.NET is a Web application framework developed and marketed by Microsoft to allow programmers to build dynamic Web sites, Web applications and Web services. It was first released in January 2002 with version 1.0 of the .NET Framework, and is the successor to Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any supported .NET language. The ASP.NET SOAP extension framework allows ASP.NET components to process SOAP messages.
After the release of Internet Information Services 4.0 in 1997, Microsoft began researching possibilities for a new Web application model that would solve common complaints about ASP, especially with regard to separation of presentation and content and being able to write "clean" code. Mark Anders, a manager on the IIS team, and Scott Guthrie, who had joined Microsoft in 1997 after graduating from Duke University, were tasked with determining what that model would look like. The initial design was developed over the course of two months by Anders and Guthrie, and Guthrie coded the initial prototypes during the Fall of 1997.
The initial prototype was called "XSP"; Guthrie explained in a 2007 interview that, "People would always ask what the X stood for. At the time it really didn't stand for anything. XML started with that; XSLT started with that. Everything cool seemed to start with an X, so that's what we originally named it." The initial prototype of XSP was done using Java, but it was soon decided to build the new platform on top of the Common Language Runtime (CLR), as it offered an object-oriented programming environment, garbage collection and other features that were seen as desirable features that Microsoft's Component Object Model platform did not support. Guthrie described this decision as a "huge risk", as the success of their new Web development platform would be tied to the success of the CLR, which, like XSP, was still in the early stages of development, so much so that the XSP team was the first team at Microsoft to target the CLR.
With the move to the Common Language Runtime, XSP was re-implemented in C# (known internally as "Project Cool" but kept secret from the public), and the name changed to ASP+, as by this point the new platform was seen as being the successor to Active Server Pages, and the intention was to provide an easy migration path for ASP developers.
Mark Anders first demonstrated ASP+ at the ASP Connections conference in Phoenix, Arizona on May 2, 2000. Demonstrations to the wide public and the initial beta release of ASP+ (and the rest of the .NET Framework) came at the 2000 Professional Developers Conference on July 11, 2000 in Orlando, Florida. During Bill Gates' keynote presentation, Fujitsu demonstrated ASP+ being used in conjunction with COBOL, and support for a variety of other languages was announced, including Microsoft's new Visual Basic .NET and C# languages, as well as Python and Perl support by way of interoperability tools created by ActiveState.
Once the ".NET" branding was decided on in the second half of 2000, it was decided to rename ASP+ to ASP.NET. Mark Anders explained on an appearance on The MSDN Show that year that, "The .NET initiative is really about a number of factors, it's about delivering software as a service, it's about XML and Web services and really enhancing the Internet in terms of what it can do ... we really wanted to bring its name more in line with the rest of the platform pieces that make up the .NET framework."
After four years of development, and a series of beta releases in 2000 and 2001, ASP.NET 1.0 was released on January 5, 2002 as part of version 1.0 of the .NET Framework. Even prior to the release, dozens of books had been written about ASP.NET, and Microsoft promoted it heavily as part of its platform for Web services. Guthrie became the product unit manager for ASP.NET, and development continued apace, with version 1.1 being released on April 24, 2003 as a part of Windows Server 2003. This release focused on improving ASP.NET's support for mobile devices.
C# with ASP.NET
- C# is a simple, modern, object-oriented language in the C/C++ family.
- It is a part of the Microsoft .NET Visual Studio environment.
- Visual Studio supports VB, VC++, VBScript, and JScript. All of these languages provide access to the Microsoft .NET platform.
- .NET includes a common execution engine and a rich class library.
- Microsoft's JVM equivalent is the Common Language Runtime (CLR).
- The CLR accommodates more than one language, such as C#, VB.NET, JScript, and C++.
- Source code ---> Intermediate Language code (IL) ---> (JIT compiler) ---> native code.
- The classes and data types are common to all of the .NET languages.
- We may develop console applications, Windows applications, and Web applications using C#.
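The points above can be illustrated with a minimal console application: the C# code below uses the common .NET class library (here List<T> and Console), and the same types are available to every .NET language. The Sum method is purely illustrative:

```csharp
using System;
using System.Collections.Generic;

public class Program
{
    // Adds up a list of integers using the shared .NET collection types.
    public static int Sum(List<int> values)
    {
        int total = 0;
        foreach (int v in values) total += v;
        return total;
    }

    public static void Main()
    {
        var values = new List<int> { 1, 2, 3, 4 };
        Console.WriteLine(Sum(values)); // prints 10
    }
}
```

The same source compiles to IL and is JIT-compiled to native code by the CLR, exactly as in the pipeline sketched above.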
Structured Query Language (SQL) is the language used to manipulate relational databases. SQL is tied very closely with the relational model.
In the relational model, data is stored in structures called relations or tables.
SQL statements are issued for the purpose of:
Data definition: Defining tables and structures in the database (DDL used to create, alter and drop schema objects such as tables and indexes).
Data manipulation: Used to manipulate the data within those schema objects (DML Inserting, Updating, Deleting the data, and Querying the Database).
A schema is a collection of database objects that can include: tables, views, indexes and sequences
The SQL statements that can be issued against an Oracle database schema are:
ALTER - Change an existing table, view or index definition (DDL)
AUDIT - Track the changes made to a table (DDL)
COMMENT - Add a comment to a table or column in a table (DDL)
COMMIT - Make all recent changes permanent (DML - transactional)
CREATE - Create new database objects such as tables or views (DDL)
DELETE - Delete rows from a database table (DML)
DROP - Drop a database object such as a table, view or index (DDL)
GRANT - Allow another user to access database objects such as tables or views (DDL)
INSERT - Insert new data into a database table (DML)
NOAUDIT - Turn off the auditing function (DDL)
REVOKE - Disallow a user access to database objects such as tables and views (DDL)
ROLLBACK - Undo any recent changes to the database (DML - Transactional)
SELECT - Retrieve data from a database table (DML)
TRUNCATE - Delete all rows from a database table (cannot be rolled back) (DDL)
UPDATE - Change the values of some data items in a database table (DML)
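As a short, hedged sketch (the table and column names are illustrative only), several of the statements above can be combined as follows:

```sql
-- DDL: define the table
CREATE TABLE messages (
    id        INT          NOT NULL,
    recipient VARCHAR(20)  NOT NULL,
    body      VARCHAR(160) NOT NULL
);

-- DML: insert, update, and query data
INSERT INTO messages (id, recipient, body)
VALUES (1, '+15551234567', 'Hello');

UPDATE messages SET body = 'Hello again' WHERE id = 1;
SELECT recipient, body FROM messages WHERE id = 1;

COMMIT;                -- make the recent changes permanent
DROP TABLE messages;   -- DDL: drop the object again
```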
MS SQL Server:
Microsoft SQL Server is a relational database server, developed by Microsoft: it is a software product whose primary function is to store and retrieve data as requested by other software applications, be they on the same computer or running on another computer across a network (including the Internet). There are at least a dozen different editions of Microsoft SQL Server aimed at different audiences and different workloads (ranging from small applications that store and retrieve data on the same computer, to millions of users and computers that access huge amounts of data from the Internet at the same time). True to its namesake, Microsoft SQL Server's primary query languages are T-SQL and ANSI SQL.
Prior to version 7.0, the code base for MS SQL Server was Sybase SQL Server, sold by Sybase to Microsoft, and it was Microsoft's entry to the enterprise-level database market, competing against Oracle, IBM, and, later, Sybase itself. Microsoft, Sybase and Ashton-Tate originally teamed up to create and market the first version, named SQL Server 1.0, for OS/2 (about 1989), which was essentially the same as Sybase SQL Server 3.0 on Unix, VMS, etc. Microsoft SQL Server 4.2 was shipped around 1992 (available bundled with IBM OS/2 version 1.3). Later, Microsoft SQL Server 4.21 for Windows NT was released at the same time as Windows NT 3.1. Microsoft SQL Server v6.0 was the first version designed for NT, and did not include any direction from Sybase.
About the time Windows NT was released, Sybase and Microsoft parted ways and each pursued their own design and marketing schemes. Microsoft negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. Later, Sybase changed the name of its product to Adaptive Server Enterprise to avoid confusion with Microsoft SQL Server. Until 1994, Microsoft's SQL Server carried three Sybase copyright notices as an indication of its origin.
Since parting ways, several revisions have been done independently. SQL Server 7.0 was a rewrite from the legacy Sybase code. It was succeeded by SQL Server 2000, which was the first edition to be launched in a variant for the IA-64 architecture.
In the ten years since release of Microsoft's previous SQL Server product (SQL Server 2000), advancements have been made in performance, the client IDE tools, and several complementary systems that are packaged with SQL Server 2005. These include: an ETL tool (SQL Server Integration Services or SSIS), a Reporting Server, an OLAP and data mining server (Analysis Services), and several messaging technologies, specifically Service Broker and Notification Services.
SQL Server 2005
SQL Server 2005 (codename Yukon), released in October 2005, is the successor to SQL Server 2000. It included native support for managing XML data, in addition to relational data. For this purpose, it defined an xml data type that could be used either as a data type in database columns or as literals in queries. XML columns can be associated with XSD schemas; XML data being stored is verified against the schema. XML is converted to an internal binary data type before being stored in the database. Specialized indexing methods were made available for XML data. XML data is queried using XQuery; Common Language Runtime (CLR) integration was a main feature with this edition, enabling one to write SQL code as Managed Code by the CLR. SQL Server 2005 added some extensions to the T-SQL language to allow embedding XQuery queries in T-SQL. In addition, it also defines a new extension to XQuery, called XML DML, that allows query-based modifications to XML data. SQL Server 2005 also allows a database server to be exposed over web services using Tabular Data Stream (TDS) packets encapsulated within SOAP (protocol) requests. When the data is accessed over web services, results are returned as XML
For relational data, T-SQL has been augmented with error handling features (try/catch) and support for recursive queries with CTEs (Common Table Expressions). SQL Server 2005 has also been enhanced with new indexing algorithms, syntax and better error recovery systems. Data pages are check summed for better error resiliency, and optimistic concurrency support has been added for better performance. Permissions and access control have been made more granular and the query processor handles concurrent execution of queries in a more efficient way. Partitions on tables and indexes are supported natively, so scaling out a database onto a cluster is easier. SQL CLR was introduced with SQL Server 2005 to let it integrate with the .NET Framework
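A brief illustrative T-SQL fragment showing both additions, a TRY/CATCH block wrapping a recursive CTE that counts from 1 to 5 (the CTE name is hypothetical):

```sql
BEGIN TRY
    -- Recursive common table expression: anchor member, then the
    -- recursive member that references the CTE itself.
    WITH Numbers (n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM Numbers WHERE n < 5
    )
    SELECT n FROM Numbers;
END TRY
BEGIN CATCH
    -- Structured error handling instead of checking @@ERROR after each statement.
    SELECT ERROR_NUMBER() AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
```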
SQL Server 2005 introduced "MARS" (Multiple Active Result Sets), a method of allowing usage of database connections for multiple purposes. SQL Server 2005 also introduced DMVs (Dynamic Management Views), which are specialized views and functions that return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance.
SQL Server 2005 introduced Database Mirroring, but it was not fully supported until the first Service Pack release (SP1). In the initial release (RTM) of SQL Server 2005, database mirroring was available, but unsupported. In order to implement database mirroring in the RTM version, you had to apply trace flag 1400 at startup. Database mirroring is a high availability option that provides redundancy and failover capabilities at the database level. Failover can be performed manually or can be configured for automatic failover. Automatic failover requires a witness partner and an operating mode of synchronous (also known as high-safety or full safety).
SQL Server 2008
The next version of SQL Server, SQL Server 2008, was released (RTM) on August 6, 2008 and aims to make data management self-tuning, self-organizing, and self-maintaining with the development of SQL Server Always On technologies, to provide near-zero downtime. SQL Server 2008 also includes support for structured and semi-structured data, including digital media formats for pictures, audio, video and other multimedia data. In current versions, such multimedia data can be stored as BLOBs (binary large objects), but they are generic bit streams. Intrinsic awareness of multimedia data will allow specialized functions to be performed on them. According to Paul Flessner, senior Vice President, Server Applications, Microsoft Corp., SQL Server 2008 can be a data storage backend for different varieties of data: XML, email, time/calendar, file, document, spatial, etc., as well as perform search, query, analysis, sharing, and synchronization across all data types.
Other new data types include specialized date and time types and a spatial data type for location-dependent data. Better support for unstructured and semi-structured data is provided using the new FILESTREAM data type, which can be used to reference any file stored on the file system. Structured data and metadata about the file are stored in the SQL Server database, whereas the unstructured component is stored in the file system. Such files can be accessed both via Win32 file-handling APIs and via SQL Server using T-SQL; the latter accesses the file data as a BLOB. Backing up and restoring the database backs up or restores the referenced files as well. SQL Server 2008 also natively supports hierarchical data, and includes T-SQL constructs to deal with it directly, without using recursive queries.
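The native hierarchical-data support mentioned above is exposed through the hierarchyid type; a minimal sketch, with a hypothetical Org table, might look like this:

```sql
-- hierarchyid stores a node's position in a tree, so no recursive query is needed
CREATE TABLE Org (
    Node hierarchyid PRIMARY KEY,
    Name varchar(50) NOT NULL
);

INSERT INTO Org VALUES (hierarchyid::GetRoot(), 'CEO');
INSERT INTO Org VALUES (hierarchyid::GetRoot().GetDescendant(NULL, NULL), 'CTO');

-- Depth in the tree comes from the type itself
SELECT Name, Node.GetLevel() AS Lvl FROM Org ORDER BY Node;
```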
The Full-text search functionality has been integrated with the database engine. According to a Microsoft technical article, this simplifies management and improves performance.
Spatial data is stored using two types. A "Flat Earth" (GEOMETRY, or planar) data type represents geospatial data which has been projected from its native, spherical coordinate system into a plane. A "Round Earth" data type (GEOGRAPHY) uses an ellipsoidal model in which the Earth is defined as a single continuous entity that does not suffer from singularities such as the international dateline, poles, or map projection zone "edges". Approximately 70 methods are available to represent spatial operations for the Open Geospatial Consortium Simple Features for SQL, Version 1.1.
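A short sketch of the round-earth GEOGRAPHY type and one of its OGC Simple Features methods (the coordinates are arbitrary example points, in WKT longitude/latitude order):

```sql
-- GEOGRAPHY uses the ellipsoidal ("Round Earth") model; SRID 4326 = WGS 84
DECLARE @a geography = geography::STGeomFromText('POINT(-122.34900 47.65100)', 4326);
DECLARE @b geography = geography::STGeomFromText('POINT(-122.33365 47.61247)', 4326);

-- STDistance is one of the OGC Simple Features methods; result is in meters
SELECT @a.STDistance(@b) AS MetersApart;
```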
SQL Server 2008 includes better compression features, which also help in improving scalability. It enhanced the indexing algorithms and introduced the notion of filtered indexes. It also includes Resource Governor, which allows reserving resources for certain users or workflows, as well as capabilities for transparent data encryption (TDE) and compression of backups. SQL Server 2008 supports the ADO.NET Entity Framework, and the reporting tools, replication, and data definition will be built around the Entity Data Model. SQL Server Reporting Services will gain charting capabilities from the integration of the data visualization products from Dundas Data Visualization, Inc., which was acquired by Microsoft. On the management side, SQL Server 2008 includes the Declarative Management Framework, which allows configuring policies and constraints, on the entire database or certain tables, declaratively. The version of SQL Server Management Studio included with SQL Server 2008 supports IntelliSense for SQL queries against a SQL Server 2008 Database Engine. SQL Server 2008 also makes the databases available via Windows PowerShell providers and management functionality available as Cmdlets, so that the server and all the running instances can be managed from Windows PowerShell.
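Two of the 2008 features named above, filtered indexes and data compression, can be sketched as follows (the Orders table and its columns are hypothetical):

```sql
-- Filtered index: indexes only the rows matching the WHERE predicate,
-- keeping the index smaller than a full-table index
CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON Orders (OrderDate)
    WHERE Status = 'Open';

-- Page-level data compression on an existing table
ALTER TABLE Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
```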
ASP.NET coding with C#: C# is a modern, type-safe, object-oriented programming language that enables programmers to quickly and easily build solutions for the Microsoft .NET platform, and MS SQL Server is the best-matched database solution for ASP.NET.
Object-Oriented Analysis and Design
Object-Oriented Analysis and Design (OOAD) is a software engineering approach that models a system as a group of interacting objects. Each object represents some entity of interest in the system being modeled, and is characterized by its class, its state (data elements), and its behavior (see Figure 1). Objects hide information about the representation of their state and hence limit access to it. An object-oriented design process involves designing the object classes and the relationships between these classes. When the design is realized as an executing program, the required objects are created dynamically using the class definitions.
Figure 1: A system made up of interacting objects
Various models can be created to show the static structure, dynamic behavior, and run-time deployment of these collaborating objects. There are a number of different notations for representing these models, such as the Unified Modeling Language (UML).
Object-oriented analysis (OOA) applies object-modeling techniques to analyze the functional requirements for a system. Object-oriented design (OOD) elaborates the analysis models to produce implementation specifications. OOA focuses on what the system does, OOD on how the system does it.
Object-oriented design is part of object-oriented development where an object-oriented strategy is used throughout the development process as follows:
Object-Oriented Analysis is concerned with developing an object-oriented model of the application domain. The identified objects reflect entities and operations that are associated with the problem to be solved.
Object-Oriented Design is concerned with developing an object-oriented model of a software system to implement the identified requirements. The objects in an object-oriented design are related to the solution to the problem that is being solved. There may be close relationships between some problem objects and some solution objects but the designer inevitably has to add new objects and to transform problem objects to implement the solution.
Object-Oriented Programming is concerned with realizing a software design using an object-oriented programming language. An object-oriented programming language, such as Java, supports the direct implementation of objects and provides facilities to define object classes.
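To make the information-hiding idea above concrete, here is a minimal Java sketch; the Account class and its members are invented for illustration and are not part of the project:

```java
// Minimal illustration: an object keeps its state private and exposes behavior.
class Account {
    private double balance;          // private state: not directly accessible from outside

    Account(double opening) { this.balance = opening; }

    void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;           // state changes only through the object's operations
    }

    double getBalance() { return balance; }
}

class Demo {
    public static void main(String[] args) {
        Account a = new Account(100.0);  // object created dynamically from the class definition
        a.deposit(25.0);
        System.out.println(a.getBalance());
    }
}
```

Because the balance field is private, other parts of the program can only read or change it through deposit and getBalance, which is exactly the access limitation the paragraph above describes.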
Unified Modeling Language
Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created, by the Object Management Group. It was first added to the list of OMG adopted technologies in 1997, and has since become the industry standard for modeling software-intensive systems. UML includes a set of graphic notation techniques to create visual models of object-oriented software-intensive systems.
The Unified Modeling Language (UML) is used to specify, visualize, modify, construct and document the artifacts of an object-oriented software-intensive system under development. UML offers a standard way to visualize a system's architectural blueprints, including elements such as:
Programming language statements.
Reusable software components.
UML combines techniques from data modeling (entity relationship diagrams), business modeling (work flows), object modeling, and component modeling. It can be used with all processes, throughout the software development life cycle, and across different implementation technologies. UML has synthesized the notations of the Booch method, the Object-Modeling Technique (OMT), and Object-Oriented Software Engineering (OOSE) by fusing them into a single, common, and widely usable modeling language. UML aims to be a standard modeling language which can model concurrent and distributed systems. UML is standard, and is evolving under the auspices of the Object Management Group (OMG).
UML models may be automatically transformed to other representations (e.g. Java) by means of QVT-like transformation languages. UML is extensible, with two mechanisms for customization: profiles and stereotypes.
Software Requirements Engineering
The requirements for a system are the descriptions of what the system should do—the services that it provides and the constraints on its operation. These requirements reflect the needs of customers for a system that serves a certain purpose such as controlling a device, placing an order, or finding information. The process of finding out, analyzing, documenting and checking these services and constraints is called Requirements Engineering (RE).
The term ‘requirement’ is not used consistently in the software industry. In some cases, a requirement is simply a high-level, abstract statement of a service that a system should provide or a constraint on a system. At the other extreme, it is a detailed, formal definition of a system function.
You need to write requirements at different levels of detail because different readers use them in different ways. Figure 2 shows possible readers of the user and system requirements. The readers of the user requirements are not usually concerned with how the system will be implemented and may be managers who are not interested in the detailed facilities of the system. The readers of the system requirements need to know more precisely what the system will do, because they are concerned with how it will support the business processes or because they are involved in the system implementation.
Figure 2: Readers of different types of requirements specification
In the present project work, we have used the following UML diagrams during requirements engineering process:
Use-case modeling: including both use-case diagrams and use-case descriptions.
The Use Case Model describes the proposed functionality of the new system. A Use Case represents a discrete unit of interaction between a user (human or machine) and the system. A Use Case is a single unit of meaningful work; for example, "login to system", "register with system", and "create order" are all Use Cases. Each Use Case has a description which describes the functionality that will be built in the proposed system. A Use Case may "include" another Use Case's functionality or "extend" another Use Case with its own behavior. Use Cases are typically related to "actors". An actor is a human or machine entity that interacts with the system to perform meaningful work.
Template for use-case model
Use-Case Model for Sending SMS using .Net through a web service
An activity diagram is a kind of UML diagram that provides graphical representations of workflows of stepwise activities and actions, with support for choice, iteration, and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.
Activity diagrams are constructed from a limited number of shapes, connected with arrows. The most important shape types:
Rounded rectangles represent activities;
Diamonds represent decisions;
Bars represent the start (split) or end (join) of concurrent activities;
A black circle represents the start (initial state) of the workflow;
An encircled black circle represents the end (final state).
Arrows run from the start towards the end and represent the order in which activities happen. Hence they can be regarded as a form of flowchart. Typical flowchart techniques lack constructs for expressing concurrency. However, the join and split symbols in activity diagrams only resolve this for simple cases; the meaning of the model is not clear when they are arbitrarily combined with decisions or loops.
Activity diagram template
Activity Diagrams for Sending SMS using .Net through a web service
Activity diagram for “Login” use-case
Adopted Software Engineering Methodology
Nowadays, adopting state-of-the-art software development methodologies means that we should adopt Object-Orientation throughout the whole Software Development Life Cycle (SDLC). So, in the present project work, we have adopted Object-Oriented Analysis and Design (OOAD) methodology. The next subsection gives a recap on this development methodology.
Software Architectural Design
What is Software Architectural Design?
Software architecture encompasses the set of significant decisions about the organization of a software system, including the selection of the structural elements and their interfaces by which the system is composed; behavior as specified in collaboration among those elements; composition of these structural and behavioral elements into larger subsystems; and an architectural style that guides this organization. Software architecture also involves functionality, usability, resilience, performance, reuse, comprehensibility, economic and technology constraints, tradeoffs, and aesthetic concerns.
6.1.2 Why is Software Architectural Design Important?
Systems should be designed with consideration for the user, the system (the IT infrastructure), and the business goals. For each of these areas, you should outline key scenarios and identify important quality attributes (for example, reliability or scalability) and key areas of satisfaction and dissatisfaction. Where possible, develop and consider metrics that measure success in each of these areas (see figure).
In the database world, a table is a set of data elements (values) that are organized using a model of vertical columns (which are identified by their name) and horizontal rows. A table has a specified number of columns, but can have any number of rows. Each row is identified by the values appearing in a particular column subset which has been identified as a candidate key.
"Table" is another term for "relation", although there is a difference: a table is usually a multiset (bag) of rows, whereas a relation is a set and does not allow duplicates. Besides the actual data rows, tables generally have associated with them some meta-information, such as constraints on the table or on the values within particular columns. The data in a table does not have to be physically stored in the database. Views are also relational tables, but their data are calculated at query time. Another example is nicknames, which represent a pointer to a table in another database.
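The notions of candidate key and column constraint above can be illustrated with a small DDL sketch; the table and column names are hypothetical and are not the project's actual schema:

```sql
CREATE TABLE Student (
    StudentID INT          NOT NULL PRIMARY KEY,  -- candidate key chosen as primary key
    Email     VARCHAR(100) NOT NULL UNIQUE,       -- another candidate key
    FullName  VARCHAR(80)  NOT NULL,
    Credits   INT          CHECK (Credits >= 0)   -- constraint on a column's values
);
```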
In the presented project, we are going to build a database that consists of a set of 3 tables, as shown in Tables 1, 2, and 3.
Figure : Software architectural design targets
When building software architecture, the following set of questions should be considered:
How will the users be using the application?
How will the application be deployed into production and managed?
What are the quality attribute requirements for the application, such as security, performance, concurrency, internationalization, and configuration?
How can the application be designed to be flexible and maintainable over time?
What are the architectural trends that might impact your application now or after it has been deployed?
6.1.3 Data-Centered Architectural Style
Data-centered software architecture is characterized by a centralized data store that is shared by all surrounding software components. The software system is decomposed into two major partitions: data store and independent software component or agents. The connections between the data module and the software components are implemented either by explicit method invocation or by implicit method invocation. In pure data-centered software architecture, the software components don’t communicate with each other directly; instead, all the communication is conducted via the data store. The shared data module provides all mechanisms for software components to access it, such as insertion, deletion, update, and retrieval (see figure).
Figure 2: Data-centered software architectural style
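A hedged Java sketch of the data-centered style described above; the class names and operations are invented for illustration. The two agents never reference each other, only the shared store, and communication happens by explicit method invocation on it:

```java
import java.util.HashMap;
import java.util.Map;

// The centralized data store: provides insertion, retrieval, and deletion
// mechanisms that all surrounding components must use.
class DataStore {
    private final Map<String, String> records = new HashMap<>();
    public synchronized void insert(String key, String value) { records.put(key, value); }
    public synchronized String retrieve(String key) { return records.get(key); }
    public synchronized void delete(String key) { records.remove(key); }
}

// Independent component: writes to the store, never talks to other components.
class ProducerAgent {
    private final DataStore store;
    ProducerAgent(DataStore s) { store = s; }
    void run() { store.insert("msg-1", "hello"); }  // explicit method invocation
}

// Independent component: reads from the store, unaware of the producer.
class ConsumerAgent {
    private final DataStore store;
    ConsumerAgent(DataStore s) { store = s; }
    String run() { return store.retrieve("msg-1"); }
}
```

Replacing the shared in-memory map with a database would give the pure form of the style: all coupling between components is reduced to the data store's interface.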
Software Detailed Design
We have here chosen to adopt Object-Oriented Design paradigm in order to model our system blueprint. An object-oriented system is made up of interacting objects that maintain their own local state and provide operations on that state. The representation of the state is private and cannot be accessed directly from outside the object. Object-oriented design processes involve designing object classes and the relationships between these classes. These classes define the objects in the system and their interactions. When the design is realized as an executing program, the objects are created dynamically from these class definitions.
Object-oriented systems are easier to change than systems developed using functional approaches. Objects include both data and operations to manipulate that data. They may therefore be understood and modified as stand-alone entities. Changing the implementation of an object or adding services should not affect other system objects. Because objects are associated with things, there is often a clear mapping between real-world entities (such as hardware components) and their controlling objects in the system. This improves the understandability, and hence the maintainability, of the design.
To develop a system design from concept to detailed, object-oriented design, there are several things that you need to do:
Understand and define the context and the external interactions with the system.
Design the system architecture.
Identify the principal objects in the system.
Develop design models.
In software design, a system can be viewed from two perspectives:
Static view: the system here is defined as a set of software objects without showing the interaction between them. Here UML class diagram can view these entities and models relationships between them.
Dynamic view: the system here is defined as a set of interacting run-time objects by showing messages between them during different system scenarios. Here UML sequence diagram (aka interaction diagram) can view these interaction and messages between system objects. For each use-case in the use-case diagram for a system there should be a corresponding sequence diagram to explain the entire scenario to accomplish functionality represented by that use-case.
10.1 APPLICATION ARCHITECTURE DESIGN
A data flow diagram (DFD) is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool, and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; such diagrams are known as logical data flow diagrams. Physical data flow diagrams, in contrast, show the actual implementation and movement of data between people, departments, and workstations. A full description of a system actually consists of a set of data flow diagrams, developed using one of two familiar notations: Yourdon, or Gane and Sarson. Each component in a DFD is labeled with a descriptive name, and each process is further identified with a number that is used for identification purposes. DFDs are developed in several levels: each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram is often called the context diagram; it consists of a single process bubble, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes at the first-level DFD.
The idea behind the explosion of a process into more processes is that understanding at one level of detail is expanded into greater detail at the next level. This is repeated until no further explosion is necessary and an adequate amount of detail has been described for the analyst to understand the process.
Larry Constantine first developed the DFD as a way of expressing system requirements in a graphical form; this led to modular design.
A DFD, also known as a "bubble chart", has the purpose of clarifying system requirements and identifying the major transformations that will become programs in system design. It is therefore the starting point of design, carried down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.
In a DFD, there are four symbols:
A square defines a source (originator) or destination of system data;
An arrow identifies data flow: it is the pipeline through which the information flows;
A circle or a bubble represents a process that transforms incoming data flows into outgoing data flows;
An open rectangle is a data store: data at rest, or a temporary repository of data.
In order to create DFDs, we used the following symbols:
Flow of Data
1. One-way data flow
2. Two-way data flow
DATA FLOW DIAGRAM
A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system, modeling its process aspects. A DFD is often used as a preliminary step to create an overview of the system, which can later be elaborated. DFDs can also be used for the visualization of data processing (structured design).
A DFD shows what kind of information will be input to and output from the system, where the data will come from and go to, and where the data will be stored. It does not show information about the timing of process or information about whether processes will operate in sequence or in parallel (which is shown on a flowchart).
Data flow diagrams were proposed by Larry Constantine, the original developer of structured design, based on Martin and Estrin's "Data Flow Graph" model of computation. Starting in the 1970s, data flow diagrams (DFDs) became a popular way to visualize the major steps and data involved in software system processes. DFDs were usually used to show data flows in a computer system, although they could in theory be applied to business process modeling. DFDs were useful to document the major data flows or to explore a new high-level design in terms of data flow.
Data flow diagrams are also known as bubble charts. A DFD is a design tool used in the top-down approach to systems design: a context-level DFD is produced first, and is then "exploded" to create a Level 1 DFD that shows some of the detail of the system being modeled. The Level 1 DFD shows how the system is divided into sub-systems (processes), each of which deals with one or more of the data flows to or from an external agent, and which together provide all of the functionality of the system as a whole. It also identifies internal data stores that must be present in order for the system to do its job, and shows the flow of data between the various parts of the system.
Data flow diagrams are one of the three essential perspectives of the structured-systems analysis and design method SSADM. The sponsor of a project and the end users will need to be briefed and consulted throughout all stages of a system's evolution. With a data flow diagram, users are able to visualize how the system will operate, what the system will accomplish, and how the system will be implemented. The old system's dataflow diagrams can be drawn up and compared with the new system's data flow diagrams to draw comparisons to implement a more efficient system. Data flow diagrams can be used to provide the end user with a physical idea of where the data they input ultimately has an effect upon the structure of the whole system from order to dispatch to report. How any system is developed can be determined through a data flow diagram model.
In the course of developing a set of leveled data flow diagrams the analyst/designer is forced to address how the system may be decomposed into component sub-systems, and to identify the transaction data in the data model.
Data flow diagrams can be used in both Analysis and Design phase of the SDLC.
There are different notations to draw data flow diagrams (Yourdon & Coad and Gane & Sarson), defining different visual representations for processes, data stores, data flows, and external entities.
A physical DFD shows how the system is actually implemented, either at the moment (Current Physical DFD), or how the designer intends it to be in the future (Required Physical DFD). Thus, a Physical DFD may be used to describe the set of data items that appear on each piece of paper that move around an office, and the fact that a particular set of pieces of paper are stored together in a filing cabinet. It is quite possible that a Physical DFD will include references to data that are duplicated, or redundant, and that the data stores, if implemented as a set of database tables, would constitute an un-normalized (or de-normalized) relational database. In contrast, a Logical DFD attempts to capture the data flow aspects of a system in a form that has neither redundancy nor duplication.
DATA FLOW DIAGRAM OF DATA BASE AND USER
Context level diagram
Level 1 diagram
ENTITY RELATIONSHIP DIAGRAM
Entity-Relationship Diagram (ERD)
ERD is an abstract and conceptual representation of data. Entity-relationship modeling is a database modeling method used to produce a type of conceptual schema or semantic data model of a system, often a relational database, and its requirements in a top-down fashion. There are three basic elements in ER models: 1) Entities are the "things" about which we seek information; 2) Attributes are the data we collect about the