
LOVELY PROFESSIONAL UNIVERSITY

ASSIGNMENT NO. FOUR

CAP – 314: PRINCIPLES OF SOFTWARE ENGINEERING

Submitted to: Lect. Ms. Deepti Sharma

Submitted by: Ajay Kumar
Section: E3803 A23
Reg. No.: 10812645
BCA–MCA (dual integrated)

PART A


Q1:- Explain the complete coding process for a hostel management system for your university.

Ans:- Hostel Management System

The Hostel Management System is a system specially designed to centrally manage Youth Hostels Associations. The system uses one single central database that serves the activities of the whole association. Technically, it is a multi-tier application running from a central server. Users work on thin clients, connected to the center using phone lines or the Internet, and never need technical support and maintenance (zero client administration).

• The system was originally designed to deal with the complexity of Youth Hostels Associations and is not a modified hotel system.

• All administrative functions and application system data have been designed to be kept centrally and to be unique for the entire organization:

• Product and services catalog
• Guest profiles and customer details
• Pricing and contracts
• Currencies and rates, countries, users, etc.

• The system handles Youth Hostel bed reservations and complex groups as well as standard hotel room reservations.

• A complete Internet online interface allows direct bookings for both guests and agents.

Hostel Management System advantages:

Control and pricing


1. The whole association is managed from one location. The central office controls all the operating parameters of all the hostels.

2. One single bill of materials and one set of customer and guest lists. Having these features centralized allows issuing complex organization-wide statistical reports.

• Bill of materials (product catalog) – This is the foundation of the system. Every financial transaction from any hostel carries a product catalog number as one of its properties, which makes report issuing easy (see the sketch after this list).

• Product catalog

• Customer details – the details are common and are visible from all hostels.

• Guest profile details – the contact information about the guest. It is enough for a guest to make a single reservation at any of the hostels for his/her profile to be visible in all the hostels of the association.

3. Prices and contracts with customers are association-wide. A quotation form can be issued from either the central office or from any hostel.
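To make the central data model concrete, here is a minimal, hypothetical Java sketch (the class and field names are invented for illustration and are not part of the system described above): every reservation carries a product catalog number and a shared guest profile, which is what makes association-wide reporting straightforward.

import java.time.LocalDate;

// Hypothetical sketch of the central data model described above: every
// financial transaction references a product catalog number, and guest
// profiles are shared by all hostels of the association.
public class ReservationDemo {

    // One entry in the association-wide product catalog (bill of materials).
    record CatalogItem(String catalogNumber, String description, double basePrice) {}

    // A guest profile is created once and is visible from every hostel.
    record GuestProfile(String guestId, String name, String email) {}

    // Every reservation carries a catalog number, which is what allows
    // organization-wide statistical reports.
    record Reservation(String hostelCode, GuestProfile guest,
                       CatalogItem item, LocalDate checkIn, int nights) {
        double total() { return item.basePrice() * nights; }
    }

    public static void main(String[] args) {
        CatalogItem dormBed = new CatalogItem("BED-STD", "Standard dorm bed", 12.50);
        GuestProfile guest = new GuestProfile("G-1001", "A. Traveller", "a@example.org");
        Reservation r = new Reservation("HOSTEL-01", guest, dormBed,
                                        LocalDate.of(2024, 5, 1), 3);
        System.out.printf("%s: %s x %d nights = %.2f%n",
                r.hostelCode(), r.item().description(), r.nights(), r.total());
    }
}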

Q2:- As a project manager, issue basic guidelines to your debuggers and developers in the context of testing of your project.

Ans :- Guidelines for Debugging

The following guidelines provide several techniques for debugging code.

Required

• Learn to use debugging tools.

You must understand and master debugging tools. For more information, see Debugging in Visual Studio.

• Know where your symbols are archived.


Symbols for every product must be archived on symbol servers. You just need to know where to find these servers. For more information, see "How to Use the Microsoft Symbol Server" in the MSDN Library.

• Study and resolve bugs that hang processes.

To users, an application that stops responding (hangs) is as bad as a crash. Either way, users lose work and must start over. However, hangs have been thought of as being much harder to study and resolve. That is no longer true for a large percentage of process hangs. Use the latest tools and techniques for solving these problems. For more information, see "How to Troubleshoot Program Faults with Dr. Watson" in the MSDN Library.

• Know how to debug a minidump.

Most testers and customers will crash your code without the benefit of an attached debugger. If you cannot reproduce the issue easily, then all that you will have is a minidump. Learning to debug by using a minidump is essential. For more information, see "Minidump Files" in the MSDN Library.

• Know how to recover a corrupted stack.

Recovering a corrupted stack is complex, but doing so is essential because so many real-world failures have stacks that seem incomprehensible. For more information, see "Troubleshooting Common Problems with Applications: Debugging in the Real World" in the MSDN Library.

Avoid

• Assume that testing will find all the bugs.

Testing will never be able to find all the bugs. It is not possible; code is too complex. Even if testing could find all the bugs, you would never have time to fix all of them. The right thing to do is to design your product so that bugs are not in the product from the start, and save yourself the trouble of fixing them later. You must take responsibility for the quality of your code. The test team just verifies the quality of your code; do not depend on testers to clean up your mess.


Recommended

• Know how to debug multithreaded applications.

Introducing threads to a program can cause it to fail in new ways. Everything that you do in a single-threaded environment to help debug applications becomes more important as the number of threads increases. For example, you might not always catch the error at the point it occurs. Usually the error is caught later, possibly in another thread. In these situations, you cannot even walk back up the call stack to find the problem; the error was in another thread, with another stack. Being as proactive as possible will help the debugging process in general, as the sketch below illustrates.
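A minimal, hypothetical Java illustration of this situation (the class, thread names, and data are invented): the defect is introduced in one thread, but the failure only surfaces later in a different thread, whose stack trace does not show where the bad data came from.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// The defect (a null put into the queue) happens in the producer thread,
// but the NullPointerException is thrown later in the consumer thread.
public class CrossThreadFailure {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String[]> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                queue.put(new String[] { "ok" });
                queue.put(new String[] { null });   // the real defect is here
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "producer");

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 2; i++) {
                    String[] item = queue.take();
                    // Throws NullPointerException for the second item; the
                    // exception's stack trace only shows the consumer thread.
                    System.out.println(item[0].toUpperCase());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "consumer");

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}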

• Learn how to do remote debugging.

Remote debugging occurs when you want to debug a problem that is occurring on another computer while continuing to work from your own computer. Developers frequently do this when a segment of code runs fine on their own computer but crashes on another system. They may want to debug it on the other system remotely, without having to go sit in front of the other computer. For more information, see Remote Debugging Setup.

• Learn to debug on live servers.

Debugging procedures are different when you are trying to debug code on a live server that customers are accessing. This is getting more common as more code is written for the Web. For more information, see "Troubleshooting Common Problems with Applications: Debugging in the Real World" in the MSDN Library.

• Comment all bug fixes.

When you fix a bug, include in the code a version number, bug ID, and your alias. If someone looks at the code afterward and has a question about the fix, they can contact you for information. For example, a fix might be annotated as shown below.
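A hypothetical example of such an annotation (the version number, bug ID, alias, and method are all made up for illustration):

// FIX v2.3.1 - Bug #4821 (ajayk): guard against an empty hostel before
// computing the occupancy rate; the previous code divided by zero.
double occupancyRate(int occupiedBeds, int totalBeds) {
    if (totalBeds == 0) {
        return 0.0;
    }
    return (double) occupiedBeds / totalBeds;
}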

• Review all bug fixes.

You should code review all fixes. Get at least one other person to examine your code (a peer review).

• Verify subtle bug fixes before check-in.


Avoid fixing the same bug twice. Use a build to verify that the fix is correct, especially for subtle bugs.

• Use a test release document to report all bug fixes.

Coordinate with the test team by documenting all your bug fixes in a test release document (TRD) and sending it to the test team in e-mail.

• Use Symbol Server to index and archive your product symbols.

By permitting Symbol Server to index and archive your product symbols, you make debugging from any system, including customer systems, fast and easy.

Not Recommended

• Fix other people’s bugs without letting them know.

It is a wonderful practice to research and attempt to fix other people's bugs. You get to know the code better, and you are serving as backup for other people. The only thing that you should not do is check in the fix without letting the owner of that code know about it.

• Resolve a bug as "Not Reproducible" without trying the same version in the same environment.

You must roll back to the version of the product where the bug was found. Do not assume that if it is not broken on the current version of the product, the bug must have been fixed; that might not be true. The code could have changed so that it just hides the bug now. If you investigate a bug until you see it break, you might actually find the root cause of the problem and fix it so the bug will not occur again on any computer.


Do everything you can to ensure that nothing obvious breaks before you involve the user. There is nothing more frustrating than finally convincing a reluctant user to come to your lab, only to have the installer blow up in their face. Likewise, make sure testers don't end up testing something you're unwilling to fix. If support is required, only test supported versions.

Remediate

To have efficient testing, you'll want to be testing with a fix in mind. Debug a failing application until you determine which remediation bucket it fits into; once you have a bucket, stop. Of course, to do that, testers must know which buckets you're considering, and when. Crisply define your strategy for remediation. Remediation options most organizations consider include:

• Get a new one. This is extremely likely to work, and offers you vendor support (which probably matters for some of your applications). This tends to be the most expensive approach, with either development or acquisition costs. Typically, this approach is used any time you can afford it!

• Shim it. This is the cost-saving route: help the application by modifying calls to the operating system before they get there. You can fix applications without access to the source code, or without changing them at all. You incur a minimal amount of additional management overhead (for the shim database), and you can fix a reasonable number of applications this way.


• Change policy. When a particular feature breaks a number of applications, you may want to disable that feature. The advantage is similar to using shims: you don't have to change or even have access to the source code. And the disadvantages are similar as well: lack of support and inability to fix everything. Some people consider this approach for Web applications, where shims aren't an option. Some of the security features can be controlled individually and disabled as a stopgap solution.

• Application virtualization. There is a lot of confusion around application virtualization as an application compatibility solution. I have heard it described as a complete separation of the application from the underlying OS, and therefore a complete and foolproof solution. This is emphatically untrue today. With the exception of file and registry calls, the application still calls the underlying OS, and any compatibility issues outside of the file system or registry remain unfixed. It is great for application-to-application conflicts, but not a generic solution for application-to-OS conflicts. Support status is unknown but likely not in your favor, as not every company supports software within application virtualization even if it is supported natively on the OS.

• Machine virtualization and terminal services. Machine virtualization is your brute-force method. You know it's going to work, because you are actually running it on a previous version of the OS, whether on your local machine or on a server somewhere. It almost always puts you in a supported scenario, since you're actually running it on a supported operating system. But, while some say "virtualize it all, migrate today, and fix things later," I tend to be more cautious. There is management overhead, since you're managing potentially double the number of operating systems per user.

Q3:- Write down the basic standards you will follow while coding in your project.


Ans :-

Basic Standards

These are the basic standards you need to be familiar with. They come up in almost any discussion of XML.

SAX

The Simple API for XML was a product of collaboration on the XML-DEV mailing list rather than a product of the W3C. It's included here because it has the same "final" characteristics as a W3C recommendation.

You can think of SAX as a "serial access" protocol for XML that is ideal for stateless processing, where the handling of an element does not depend on any of the elements that came before. With a small memory footprint and fast execution speed, this API is great for straight-through transformations of data into XML, or out of it. It is an event-driven protocol: you register a handler with the parser that defines one callback method for elements, another for text, and one for comments (plus methods for errors and other XML components).
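A minimal sketch of this event-driven style using the standard javax.xml.parsers SAX API (the XML string and element names are made up for illustration):

import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// The handler's callbacks are invoked as the parser streams through the
// document, so nothing is held in memory.
public class SaxDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<catalog><item id=\"1\">Dorm bed</item></catalog>";

        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler() {
            @Override
            public void startElement(String uri, String local, String qName,
                                     Attributes attrs) {
                System.out.println("start element: " + qName);
            }

            @Override
            public void characters(char[] ch, int start, int length) {
                System.out.println("text: " + new String(ch, start, length));
            }
        });
    }
}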

StAX

The Streaming API for XML is a Java "pull parsing" API. This API also acts like a "serial access" protocol, but its processing model is ideal for state-dependent processing. With this API, you ask the parser to send you the next thing it has, and then decide what to do with what it gives you. For example, when you're in a heading element and you get text, you'll use one font size; but if you're in a normal paragraph and you get text, you'll use a different font size.
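A minimal sketch of this pull-parsing model with the standard javax.xml.stream API (document contents invented for illustration): the application asks for the next event and reacts according to the element it is currently inside.

import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

// The application pulls events from the reader and handles the text
// differently depending on which element it is currently inside.
public class StaxDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<doc><heading>Title</heading><para>Body text</para></doc>";

        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        String current = "";
        while (reader.hasNext()) {
            int event = reader.next();
            if (event == XMLStreamConstants.START_ELEMENT) {
                current = reader.getLocalName();
            } else if (event == XMLStreamConstants.CHARACTERS && !reader.isWhiteSpace()) {
                // State-dependent handling: headings and paragraphs differ.
                String style = current.equals("heading") ? "large font" : "normal font";
                System.out.println(style + ": " + reader.getText());
            }
        }
        reader.close();
    }
}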

DOM

Document Object Model

The Document Object Model protocol converts an XML document into a collection of objects in your program. You can then manipulate the object model in any way that makes sense. This mechanism is also known as the "random access" protocol, because you can visit any part of the data at any time. You can then modify the data, remove it, or insert new data.
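A minimal sketch with the standard javax.xml.parsers DOM API (example document invented): the whole document is loaded into a tree that can be read and modified in any order.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

// "Random access": the object tree can be visited and changed anywhere.
public class DomDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<catalog><item>Dorm bed</item></catalog>";

        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(xml)));

        // Insert a new element anywhere in the tree.
        Element extra = doc.createElement("item");
        extra.setTextContent("Breakfast");
        doc.getDocumentElement().appendChild(extra);

        System.out.println(doc.getDocumentElement().getElementsByTagName("item")
                .getLength() + " items in the catalog");
    }
}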

JDOM and dom4j

Although the Document Object Model provides a lot of power for document-oriented processing, it doesn't provide much in the way of object-oriented simplification.


Java developers who are processing more data-oriented structures, rather than books, articles, and other full-fledged documents, frequently find that object-oriented APIs such as JDOM and dom4j are easier to use and more suited to their needs.

Here are the important differences to understand when you choose between the two:

• JDOM is a somewhat cleaner, smaller API. Where coding style is an important consideration, JDOM is a good choice.

• JDOM is a Java Community Process (JCP) initiative. When completed, it will be an endorsed standard.

• dom4j is a smaller, faster implementation that has been in wide use for a number of years.

• dom4j is a factory-based implementation. That makes it easier to modify for complex, special-purpose applications. At the time of this writing, JDOM does not yet use a factory to instantiate an instance of the parser (although the standard appears to be headed in that direction). So, with JDOM, you always get the original parser. (That's fine for the majority of applications, but may not be appropriate if your application has special needs.)
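As a sketch of the more object-oriented style these APIs offer, assuming the JDOM 2 library (org.jdom2) is available on the classpath (document contents invented):

import java.io.StringReader;
import org.jdom2.Document;
import org.jdom2.Element;
import org.jdom2.input.SAXBuilder;

// Child elements come back as a plain typed list, without the extra
// node-handling ceremony of the W3C DOM.
public class JdomDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<catalog><item>Dorm bed</item><item>Breakfast</item></catalog>";

        Document doc = new SAXBuilder().build(new StringReader(xml));
        Element root = doc.getRootElement();
        for (Element item : root.getChildren("item")) {
            System.out.println(item.getText());
        }
    }
}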

DTD

The Document Type Definition specification is actually part of the XML specification rather than a separate entity. On the other hand, it is optional; you can write an XML document without it. And there are a number of schema standards proposals that offer more flexible alternatives. So the DTD is discussed here as though it were a separate specification.

A DTD specifies the kinds of tags that can be included in your XML document, along with the valid arrangements of those tags. You can use the DTD to make sure that you don't create an invalid XML structure. You can also use it to make sure that the XML structure you are reading (or that got sent over the Net) is indeed valid.

Unfortunately, it is difficult to specify a DTD for a complex document in such a way that it prevents all invalid combinations and allows all the valid ones, so constructing a DTD is something of an art. The DTD can exist at the front of the document, as part of the prolog. It can also exist as a separate entity, or it can be split between the document prolog and one or more additional entities.

However, although the DTD mechanism was the first method defined for specifying valid document structure, it was not the last. Several newer schema specifications have been devised. You'll learn about those momentarily.
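A minimal sketch of DTD validation using the standard JAXP DOM parser (the internal DTD subset and document are invented): the undeclared <price> element causes the validating parser to report an error.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.ErrorHandler;
import org.xml.sax.InputSource;
import org.xml.sax.SAXParseException;

// A document with an internal DTD subset is parsed by a validating
// parser, and any validity violations are reported.
public class DtdDemo {
    public static void main(String[] args) throws Exception {
        String xml =
            "<!DOCTYPE catalog [\n" +
            "  <!ELEMENT catalog (item+)>\n" +
            "  <!ELEMENT item (#PCDATA)>\n" +
            "]>\n" +
            "<catalog><item>Dorm bed</item><price>12.50</price></catalog>";

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setValidating(true);               // enable DTD validation
        DocumentBuilder builder = factory.newDocumentBuilder();
        builder.setErrorHandler(new ErrorHandler() {
            @Override public void warning(SAXParseException e) { report(e); }
            @Override public void error(SAXParseException e) { report(e); }      // <price> is not declared
            @Override public void fatalError(SAXParseException e) { report(e); }
            private void report(SAXParseException e) {
                System.out.println("Validation problem: " + e.getMessage());
            }
        });
        builder.parse(new InputSource(new StringReader(xml)));
    }
}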


Namespaces

The namespace standard lets you write an XML document that uses two or more sets of XML tags in modular fashion. Suppose, for example, that you created an XML-based parts list that uses XML descriptions of parts supplied by other manufacturers (online!). The price data supplied by the subcomponents would be amounts you want to total up, whereas the price data for the structure as a whole would be something you want to display. The namespace specification defines mechanisms for qualifying the names so as to eliminate ambiguity. That lets you write programs that use information from other sources and do the right things with it.

The latest information on namespaces can be found at
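A minimal sketch of namespace qualification with the standard namespace-aware DOM parser (the namespace URIs and element names are made up): supplier prices are totalled, while the catalog's own price is only displayed.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Two different "price" vocabularies coexist in one document without ambiguity.
public class NamespaceDemo {
    public static void main(String[] args) throws Exception {
        String xml =
            "<parts xmlns:sup=\"http://example.org/supplier\"" +
            "       xmlns:cat=\"http://example.org/catalog\">" +
            "  <sup:price>7.00</sup:price>" +
            "  <sup:price>5.00</sup:price>" +
            "  <cat:price>15.00</cat:price>" +
            "</parts>";

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);           // required for the *NS methods
        Document doc = factory.newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        // Supplier prices are totalled; the catalog price is just displayed.
        NodeList supplier = doc.getElementsByTagNameNS("http://example.org/supplier", "price");
        double total = 0;
        for (int i = 0; i < supplier.getLength(); i++) {
            total += Double.parseDouble(supplier.item(i).getTextContent());
        }
        System.out.println("Sum of supplier prices: " + total);

        Element catalogPrice = (Element) doc
                .getElementsByTagNameNS("http://example.org/catalog", "price").item(0);
        System.out.println("Displayed catalog price: " + catalogPrice.getTextContent());
    }
}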

XSL

The Extensible Stylesheet Language adds display and transformation capabilities to XML. The XML standard specifies how to identify data, rather than how to display it. HTML, on the other hand, tells how things should be displayed without identifying what they are. Among other purposes, XSL bridges the gap between the two.

The XSL standard has two parts: XSLT (the transformation standard, described next) and XSL-FO (the part that covers formatting objects). XSL-FO lets you specify complex formatting for a variety of publications.
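A minimal sketch of an XSLT transformation with the standard JAXP javax.xml.transform API (the stylesheet and document are invented): a tiny stylesheet turns catalog items into an HTML list.

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// The stylesheet describes the transformation; the Transformer applies it.
public class XsltDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<catalog><item>Dorm bed</item><item>Breakfast</item></catalog>";
        String xsl =
            "<xsl:stylesheet version=\"1.0\"" +
            "    xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">" +
            "  <xsl:template match=\"/catalog\">" +
            "    <ul><xsl:for-each select=\"item\">" +
            "      <li><xsl:value-of select=\".\"/></li>" +
            "    </xsl:for-each></ul>" +
            "  </xsl:template>" +
            "</xsl:stylesheet>";

        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)));
        transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(xml)),
                              new StreamResult(out));
        System.out.println(out);   // <ul><li>Dorm bed</li><li>Breakfast</li></ul>
    }
}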

PART B

Q1:- Create a black-box/white-box testing document for your software project.


Ans :- Test Cases for Unit Test

Unit testing requires testing both the unit's internal structure and its behavioral characteristics. Testing the internal structure requires knowledge of how the unit is implemented, and tests based upon this knowledge are known as white-box tests. Testing a unit's behavioral characteristics focuses on the externally observable behavior of the unit without knowledge of, or regard for, its implementation; tests based upon this approach are referred to as black-box tests. Deriving test cases based upon both approaches is described below.

White-Box Tests

Theoretically, you should test every possible path through the code. Achieving such a goal, in all but very simple units, is either impractical or almost impossible. At the very least you should exercise every decision-to-decision path (DD-path) at least once, resulting in executing all statements at least once. A decision is typically an if-statement, and a DD-path is a path between two decisions.

To get this level of test coverage, it is recommended that you choose test data so that every decision is evaluated in every possible way. Toward that end, the test cases should make sure that:

• Every Boolean expression is evaluated to both true and false. For example, the expression (a<3) OR (b>4) evaluates to four combinations of true/false.

• Every loop is exercised zero times, once, and more than once.

Use code-coverage tools to identify the code not exercised by your white-box testing. Reliability testing should be done simultaneously with your white-box testing.

Example:

Assume that you perform a structure test on a function member in the class Set of Integers. The test, with the help of a binary search, checks whether the set contains a given integer.


Figure: The member function and its corresponding flowchart. Dotted arrows illustrate how you can use two test cases to execute all the statements at least once.

Theoretically, for an operation to be thoroughly tested, the test cases should traverse all the combinations of routes in the code. In member, there are three alternative routes inside the while-loop. A test case can traverse the loop either several times or not at all. If the test case does not traverse the loop at all, you will find only one route through the code. If it traverses the loop once, you will find three routes. If it traverses twice, you will find six routes, and so forth. Thus, the total number of routes will be 1 + 3 + 6 + 12 + 24 + 48 + ..., which in practice is an unmanageable number of route combinations. That is why you must choose a subset of all these routes. In this example, you can use two test cases to execute all the statements. In one test case, you might choose Set of Integers = {1,5,7,8,11} and t = 3 as test data. In the other test case, you might choose Set of Integers = {1,5,7,8,11} and t = 8.
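Since the flowchart itself is not reproduced here, the following is a hypothetical reconstruction of such a member function (the names and exact code are illustrative only; the text above describes only a binary search that checks membership in a sorted set of integers).

// Binary search over a sorted set of integers, with the two test cases
// from the text (t = 3 and t = 8) exercising every statement at least once.
public class SetOfIntegers {
    private final int[] elements;          // kept sorted in ascending order

    public SetOfIntegers(int... sortedElements) {
        this.elements = sortedElements;
    }

    public boolean member(int t) {
        int low = 0;
        int high = elements.length - 1;
        while (low <= high) {              // the while-loop discussed above
            int mid = (low + high) / 2;
            if (elements[mid] == t) {
                return true;               // route 1: found
            } else if (elements[mid] < t) {
                low = mid + 1;             // route 2: search the upper half
            } else {
                high = mid - 1;            // route 3: search the lower half
            }
        }
        return false;                      // loop exited without finding t
    }

    public static void main(String[] args) {
        SetOfIntegers set = new SetOfIntegers(1, 5, 7, 8, 11);
        System.out.println(set.member(3));  // false
        System.out.println(set.member(8));  // true
    }
}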

Black-Box Tests

The purpose of a black-box test is to verify the unit's specified behavior without looking at how the unit implements that behavior. Black-box tests focus and rely upon the unit's input and output.

Equivalence partitioning is a technique for reducing the required number of tests. For every operation, you should identify the equivalence classes of the arguments and the object states. An equivalence class is a set of values for which an object is supposed to behave similarly. For example, a Set has three equivalence classes: empty, some elements, and full.

Use code-coverage tools to identify the code not exercised by your black-box testing. Reliability testing should be done simultaneously with your black-box testing.

The next two subsections describe how to identify test cases by selecting test data for specific arguments.


Test Cases based upon Input Arguments

An input argument is an argument used by an operation. You should create test cases by using input arguments for each operation, for each of the following input conditions:

• Normal values from each equivalence class.
• Values on the boundary of each equivalence class.
• Values outside the equivalence classes.
• Illegal values.

Remember to treat the object state as an input argument. If, for example, you test an operation add on an object Set, you must test add with values from all of Set's equivalence classes, that is, with a full Set, with some elements in Set, and with an empty Set.
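A hypothetical sketch of those cases in plain Java (the BoundedSet class and its fixed capacity are invented here simply to make the empty / some elements / full states concrete; a real project would typically write these as unit tests):

import java.util.ArrayList;
import java.util.List;

// Exercising an "add" operation with the object state treated as an input
// argument, one case per equivalence class (empty, some elements, full).
public class AddEquivalenceClasses {

    static class BoundedSet {
        private final int capacity;
        private final List<Integer> values = new ArrayList<>();

        BoundedSet(int capacity) { this.capacity = capacity; }

        boolean add(int v) {
            if (values.size() >= capacity || values.contains(v)) {
                return false;                   // full, or already present
            }
            return values.add(v);
        }
    }

    public static void main(String[] args) {
        BoundedSet empty = new BoundedSet(3);
        System.out.println("empty set, add 1 -> " + empty.add(1));     // expected: true

        BoundedSet some = new BoundedSet(3);
        some.add(1);
        System.out.println("some elements, add 2 -> " + some.add(2));  // expected: true

        BoundedSet full = new BoundedSet(2);
        full.add(1);
        full.add(2);
        System.out.println("full set, add 3 -> " + full.add(3));       // expected: false
    }
}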

Test Cases based upon Output Arguments

An output argument is an argument that an operation changes. An argument can be both an input and an output argument. Select input so that you get output according to each of the following:

• Normal values from each equivalence class.
• Values on the boundary of each equivalence class.
• Values outside the equivalence classes.
• Illegal values.

Remember to treat the object state as an output argument. If, for example, you test an operation remove on a List, you must choose input values so that the List is full, has some elements, and is empty after the operation is performed (test with values from all its equivalence classes).

If the object is state-controlled (reacts differently depending on the object's state), you should use a state matrix such as the one in the following figure.

Figure: A state matrix for testing. You can test all combinations of state and stimuli on the basis of this matrix.


Q2:- Design all the possible test cases to test the Windows Media Player application on Windows 7 or Windows Vista. The user can try to play any kind of file available in the system, so derive the test cases accordingly.

Ans :- This feature enables Firefox to hand off content to both web applications (like Google Calendar) and to local applications that have been registered to handle this type of data (like Mozilla Sunbird).

There is a large number of such data types and web/local application types. For our testing, we will try to cover all of the most likely types and popular applications.

The list of popular data formats and protocols that this feature aims to handle:

• mailto:
• hCalendar (text/calendar), webcal:
• hCard (text/vcard)
• geo links (application/vnd.google-earth.kml)
• audio feeds (audio/x-mp3)
• video feeds
• RSS
• image file types
• application (executable) file types
• callto: protocol

Scope of planned testing

It is not possible to test every version of every application on every OS for every type of feed/format combination. Doing so would result in a gigantic test matrix. We have instead taken the view to test the most prominent applications for each of these items on each platform. We will also run some automated tests against the code to verify that extension authors have the required abilities to create their own protocol handlers and that users can associate other applications to handle specific types of content.

Platform and Configurations

We will want to test with the most popular web and platform-specific applications for each high-priority type of format we want to support.


TODO: Is there a need to test cross-platform products on more than one platform? Because at some point, you end up testing that product and not the content handler integration.

Protocol/File Type – Windows Apps

• mailto: – Outlook 2007, Windows Mail (Outlook Express), Thunderbird

• text/calendar, hCalendar, webcal: – Outlook 2007, Outlook 2003, Windows Calendar (Vista only), Mozilla Lightning or Sunbird

• hCard and text/vcard – Outlook 2007, Outlook 2003, Thunderbird

• * Geo links (type of microformat), application/vnd.google-earth.kml – TODO: Are there any thick clients for this?

• Audio feeds (audio/x-mp3) – Windows Media Player, iTunes, RealPlayer, TODO: Others?

• Video feeds – Windows Media Player, iTunes, Democracy Player, RealPlayer

• RSS – Thunderbird, RSS reader in Vista Desktop (not possible without changes to Firefox), RssReader

• Image files – Paint

• PDF files – Adobe Reader

• callto: – Skype, Gizmo

Asterisks designate items with test cases in Litmus.

Major Test Areas

The major test areas are to test the applications in the table above with the proper type of content. We must ensure that the content types are handled appropriately in the handoff between Firefox and the third-party application. We will probably use Litmus to test this.

Q3:- Identify the situations which can cause project failure during testing.

Ans :- Projects Fail

Computer projects fail when they do not meet the following criteria for success:

• It is delivered on time.
• It is on or under budget.
• The system works as required.

Only a few projects achieve all three. Many more are delivered which fail on one or more of these criteria, and a substantial number are cancelled having failed badly.

So what are the key factors for success? Organisations and individuals have studied a number of projects that have both succeeded and failed, and some common factors emerge. A key finding is that there is no one overriding factor that causes project failure. A number of factors are involved in any particular project failure, some of which interact with each other. Here are some of the most important reasons for failure.

1. Lack of User Involvement

Lack of user involvement has proved fatal for many projects. Without user involvement nobody in the business feels committed to a system, and people can even be hostile to it. If a project is to be a success, senior management and users need to be involved from the start, and continuously throughout the development. This requires time and effort, and when the people in a business are already stretched, finding time for a new project is not high on their priorities.


Therefore senior management need to continuously support the project to make it clear to staff that it is a priority.

2. Long or Unrealistic Time Scales

Long timescales for a project have led to systems being delivered for products and services no longer in use by an organisation. The key recommendation is that project timescales should be short, which means that larger systems should be split into separate projects. There are always problems with this approach, but the benefits of doing so are considerable.

3. Poor or No Requirements

Many projects have high-level, vague, and generally unhelpful requirements. This has led to cases where the developers, having no input from the users, build what they believe is needed, without having any real knowledge of the business. Inevitably, when the system is delivered, business users say it does not do what they need it to. This is closely linked to lack of user involvement, but goes beyond it. Users must know what it is they want and be able to specify it precisely. As non-IT specialists, this normally means they need skills training.

4. Scope Creep

Scope is the overall view of what a system will deliver. Scope creep is the insidious growth in the scale of a system during the life of a project. As an example, for a system which will hold customer records, it is then decided it will also deal with customer bills, then these bills will be provided on the Internet, and so on and so forth. All the functionality will have to be delivered at one time, therefore affecting timescales, and all of it will have to have detailed requirements. This is a management issue closely related to change control. Management must be realistic about what it is they want and when, and stick to it.

5. No Change Control System

Despite everything, businesses change, and change is happening at a faster rate than ever before. So it is not realistic to expect no change in requirements while a system is being built. However, uncontrolled changes play havoc with a system under development and have caused many project failures.

This emphasises the advantages of shorter timescales and a phased approach to building systems, so that change has less chance to affect development.


6. Poor Testing

The developers will do a great deal of testing during development, but eventually the users must run acceptance tests to see if the system meets the business requirements. However, acceptance testing often fails to catch many faults before a system goes live because of:

• Poor requirements which cannot be tested
• Poorly planned, or unplanned, tests, meaning that the system is not methodically checked
• Inadequately trained users who do not know what the purpose of testing is
• Inadequate time to perform tests as the project is late

Users, in order to build their confidence with a system and to utilise their experience of the business, should do the acceptance testing. To do so they need good, testable requirements and well designed and planned tests; they must also be adequately trained and have sufficient time to achieve the testing objectives.

Conclusion

These six factors are not the only ones that affect the success or failure of a project, but in many studies and reports they appear near, or at, the top of the list. They are all interlinked, but as can be seen they are not technical issues, but management and training ones. This supports the idea that IT projects should be treated as business projects.