Software Development Lifecycle Overview

Introduction

Software engineering is a broad field that plays a focal role in software development and implementation. One of its core elements is the software development process, which divides software development work into distinct phases in order to improve design, project management and product management. The process is closely associated with the software development lifecycle (SDLC), a series of phases intended to provide a common understanding of the entire process of building software. The lifecycle covers the stages from the conception of the software through its development to the conversion of requirements and ideas into working functions. A good engineer is therefore required to understand which SDLC model suits the business requirements and project context. The models include the waterfall model, the spiral model, the V-shaped model, the iterative and incremental model and the evolutionary prototyping model. This discussion examines three process models, namely the waterfall model, rapid application development and extreme programming.

Waterfall Model

The process model, lifecycle and operations

The waterfall SDLC model is commonly described as a sequential software development process in which progress flows steadily downwards through a list of phases that must be executed in an established order for the software development process to succeed. The model was initially proposed by Dr. Winston W. Royce in 1970. It was originally referred to as the linear sequential lifecycle model and is commonly considered the earliest SDLC approach (Bassil 2012). The lifecycle and mode of operation of this model are considered simple and easy to understand as a result of its linear sequential flow. The entire software development process is divided into distinct phases, and the output of one phase becomes the input of the next. If the expected output is not produced in a phase, that phase is repeated until the output is achieved before proceeding to the next phase. As a result, the model is often described as recursive in the sense that every phase can be repeated until its output is satisfactory. The cycle is as shown below.

Figure: The waterfall model and its phases (adapted from Bassil 2012)

The initial phase of the model is requirements analysis, in which the potential requirements, guidelines and deadlines assigned to the project are gathered before being put into a functional specification. The phase concentrates on the definition and planning of the entire project without going into specific processes (Bassil 2012). It explores both functional and non-functional requirements. The functional requirements are expressed through use cases describing the user's interaction with the software, and cover the purpose, scope, software attributes, functions, perspective, functionality specifications, user characteristics and database requirements (Petersen et al. 2009). The non-functional requirements comprise the criteria, constraints and limitations imposed on the design and operation of the software. Significant properties under observation in this phase include availability, performance, reliability, maintainability, quality standards, scalability and testability.

The second phase is system design, which covers the process of planning and solving the problem the software is meant to address. The stage typically involves software designers and developers defining a plan for the solution, incorporating the algorithm design, the database conceptual schema, the software architecture design, the logical diagram design, the data structure, the concept design and the graphical user interface design.

The third phase is implementation, in which developers realize the business requirements and design specifications by translating them into a concrete executable program, website or database, among other platforms. The code is written and compiled at this stage, and text files and databases are created. It entails converting the blueprints and requirements into the appropriate production environment.

The fourth phase is testing, which largely covers verification and validation. The process checks the extent to which the software solution addresses the intended goal based on the specification and the original requirements. Verification covers evaluating the software to determine whether the development phase has satisfied the necessary conditions. This phase also gives leeway for debugging, correcting the system and refining it accordingly.

The fifth phase is deployment, in which the application or product is deemed functional and is therefore deployed into a live environment. The last phase is maintenance, which covers modifying the software solution to refine the output, improve performance, correct errors and boost the quality of the final product.
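To make the sequential flow concrete, below is a minimal Python sketch, purely illustrative, that models the waterfall as a strict pipeline: each phase is reworked until it yields an output, and that output becomes the input of the next phase. All phase names and functions here are hypothetical placeholders, not taken from the cited sources.

from typing import Callable, Optional

# A phase consumes the previous phase's artifact and returns its own output,
# or None if the phase must be reworked.
Phase = Callable[[object], Optional[object]]

def run_waterfall(phases, initial_input):
    artifact = initial_input
    for name, phase in phases:
        result = None
        while result is None:      # repeat the phase until it produces output
            print("Executing phase:", name)
            result = phase(artifact)
        artifact = result          # the output of one phase feeds the next
    return artifact

# Hypothetical usage with trivial stand-in phases:
pipeline = [
    ("requirements analysis", lambda x: {"requirements": x}),
    ("system design",         lambda x: {"design": x}),
    ("implementation",        lambda x: {"code": x}),
    ("testing",               lambda x: {"verified": x}),
    ("deployment",            lambda x: {"live": x}),
    ("maintenance",           lambda x: {"maintained": x}),
]
product = run_waterfall(pipeline, "business needs")

The point of the sketch is the control flow: once a phase's output has been accepted there is no path back to it, which is exactly the rigidity that the RAD and XP sections below contrast with.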

Analysis of the rationale behind the waterfall model

The waterfall model is adopted in order to manage the resources assigned to IT and software-related projects meant to meet consumer needs. In this sense, the scope, resources, deliverables and budget are all managed in a disciplined fashion. On this basis, the waterfall model carries both technological and business implications for how organisations adopt development frameworks and build and maintain products. Companies commonly use it to establish the optimal grounds and resources needed to complete a specific task.

Advantages and disadvantages

The waterfall model has a number of advantages. First, the model is simple and easy to understand, given the flow of phases from requirements analysis to maintenance. Secondly, phases are processed and completed one at a time, which makes it convenient to bring each to fruition (Bassil 2012). Besides, it is easy to manage a project using this model owing to its stability and rigidity. In addition, the model has clearly defined stages, milestones and results. However, the waterfall model still has many disadvantages, including a high level of risk and uncertainty, the fact that no working software is produced before the end of the lifecycle, and complications in trying to measure progress.

Rapid Application Development Model (RAD)

The process model, lifecycle and operations

The Rapid Application Development model has attracted different descriptions and analyses depending on the case scenario and contextual requirements. However, the literature broadly agrees that the RAD model is a development model which prioritises rapid prototyping and quick feedback through short development and testing cycles. With the RAD model, developers can easily iterate and update the software without starting the schedule from scratch every time. RAD emerged to rectify shortcomings of the waterfall model, notably its rigid testing phase, which gives no room for adjusting the core functions and features of the software once testing is done (Viceconti et al. 2007). The operation of the RAD model follows the four-stage lifecycle shown below.

Figure: RAD lifecycle stages (adapted from Viceconti et al. 2007)

The preliminary phase in the RAD model is requirements planning (RP), which covers a range of objectives. It aims to establish a general but relevant understanding of the business problems surrounding the development and eventual operation of the system, to build familiarity with the prevailing systems, and to identify the business processes that the proposed software may support. The phase allows end users, IS professionals and business executives to form and take part in Joint Requirements Planning (JRP). RP covers researching the current situation, defining the requirements and finalising them. In researching the current situation, the RAD model relies on preliminary discussions held through the JRP workshops; the requirements and their deliverables are then defined through the same workshops (Viceconti et al. 2007). Finally, the RP stage demands that the requirements be documented, with estimates of duration and cost established. The scope needs to be clearly defined, with approvals extended to the implementation phase.

The second phase is user design, which focuses on analysing the business activities and business data associated with the proposed system area, and on developing the system structure in terms of its automated and manual functions. Further targets include developing those manual and automated functions and selecting the most suitable construction approach (Pop and Altar 2014).

The third phase is rapid construction. It aims to complete a more detailed design of the recommended system, create and test the application or software, generate a system that functions at an acceptable level of performance, prepare the documentation needed for the proposed application and perform the steps required to convert the system to production status. The final phase is cutover, or transition, which aids installation of the system with minimal disruption to business activities, maximisation of the system's efficiency and identification of areas that may need future enhancement.
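The loop below is a minimal Python sketch, under invented assumptions, of the prototyping cycle at the heart of RAD: build a prototype quickly, gather user feedback (as in a user-design or JRP workshop), and fold the feedback into the next iteration instead of restarting the schedule. Both stand-in functions are hypothetical.

def build_prototype(requirements, revision):
    # Stand-in for rapid construction of a prototype from current requirements.
    return {"revision": revision, "features": list(requirements)}

def gather_feedback(prototype):
    # Stand-in for a user-design workshop; returns requested changes,
    # accepting the prototype after two rounds of refinement.
    return [] if prototype["revision"] >= 2 else ["refine feature layout"]

requirements = ["enter order", "print invoice"]
revision = 0
while True:
    prototype = build_prototype(requirements, revision)
    changes = gather_feedback(prototype)
    if not changes:                  # users accept: proceed to cutover
        break
    requirements.extend(changes)     # feedback feeds the next iteration
    revision += 1
print("Accepted prototype after", revision, "refinement cycles:", prototype)

Unlike the waterfall sketch above, this cycle revisits earlier work freely; user acceptance, not phase order, is what terminates the loop.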

Analysis of the rationale behind Rapid Application Development

The RAD model aids the development of an efficient system that adds value from the point of design to the point of launch. This consistently yields positive results on the business side as well as approval of the technology put in place. The RAD model is essentially conceived for this reason: it supports developing prototypes and rapidly testing features and functions without undue worry about how the end product may be affected. The prime focus of the framework is speed, reducing development time compared with other models that are likely to take longer.

Advantages and disadvantages

Making use of the RAD model has its own pros and cons. Among the advantages, the framework is believed to deliver better quality, given that users are given time to interact with the prototypes in relation to the business functionality. Secondly, developers can control the risks associated with the project given that the phases are flexible (Daud et al. 2010). Lastly, projects can be completed within the required time and within the planned resources. Some of the drawbacks of this model are that it puts less emphasis on non-functional requirements, it can lead to poor designs, and developers may have less control over the project.

Extreme Programming (XP)

The process model, lifecycle and operations

Extreme programming (XP) is regarded as an agile software development framework whose central aim is developing high quality software. Among agile frameworks it is distinctive for its focus on the engineering practices of software development, and it comes with a range of such practices. XP operates on simple rules built around five basic values. The first is communication, whereby team members work jointly at every stage. Simplicity compels developers to produce simple code that adds more value to the product. The model also insists on feedback and respect, as well as courage, especially where programmers are required to evaluate results without resorting to excuses (Ghani and Yasin 2013). The operation of this model follows the XP lifecycle shown below.

Figure: The XP lifecycle

The first phase of the XP model is planning, and it plays the most fundamental role in setting goals for the project and for each iterative cycle. The team meets with the customers and interviews them on significant aspects of the intended software. The customer plays the fundamental role of formulating the vision through user stories, which are then prioritised across the release plan. The team can also prepare estimates of the time and cost of the iterations. One of the tools used in the XP planning approach is the critical path method. Designing is the second phase of the XP model (Ghani and Yasin 2013). This phase allows the team to define the main features of the future code. The key activity is creating a simple design while sharing responsibilities. The simple design further calls for the application of a system metaphor, agreed class names and methods, and agreeable styles and formats. The design may also involve spike solutions: small throwaway experiments whose focus is exploring and mitigating risks. Coding follows designing. Good code is supposed to be simple, thereby supporting the functionality of the product, and the agreed metaphors and standards should be put to use. The final phase is testing, where the code is expected to have unit tests, which are significant in eliminating bugs.
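As an illustration of the testing phase, here is a minimal unit test using Python's standard unittest module. The function under test is a hypothetical example, not drawn from the cited sources, and in strict XP the test would be written before the code it exercises.

import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, kept simple per the XP value of simplicity."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()

Running the file executes both tests; a failing test flags a bug in the small task just delivered, which is how XP's short cycles keep defects from accumulating.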

Analysis of the rationale behind Extreme Programming

Extreme programming aids the production of high quality software at reduced cost. This is possible through the adoption of several short development cycles instead of the long ones that may invite poor quality. This rationale is grounded in the basic values, principles and practices that XP adds on top of the commonly known agile programming framework. Common values in support of this rationale include simplicity, communication, courage, respect and feedback. Adding value to the developed software calls for further practices covering fine-scale feedback, shared understanding, continuous process and programmer welfare.

Advantages and disadvantages

Extreme programming has its own limitations and areas of strength. The key advantage of XP is its capacity to save time and costs for software development companies. At the same time, its simplicity is advantageous across a range of projects. The entire XP process is said to be both accountable and visible, given that developers commit only to tasks that can be accomplished. The process is also faster and can contribute to employee satisfaction. Areas of weakness for XP include the risk of focusing on the code more than the design, which may invite ambiguity. XP also places little weight on measuring code quality assurance, which may lead to defects in the initial code.

Comparison of the three Process Models

There are common features, characteristics and practices across extreme programming, the waterfall model and the rapid application development model. The common trait across the three is that they are all software development lifecycle models, meaning they remain relevant in the course of developing either software or an application. This commonality could be witnessed in the case study of Tesenso AB, which aimed at tapping into the best software solutions for single-developer projects. They are all iterative in nature, especially when failures arise during software testing. In the waterfall method, phases that do not yield the required output may be repeated until they meet the right standard, with the outcomes then used as inputs to the subsequent phases. In the RAD model, iterations can be performed at any stage for the purposes of acquiring the best prototype, while XP relies on repeating and delivering small tasks, a process that aids perfection (Munassar and Govardhan 2010). Apart from the similarities, the three models also differ in many ways. First, XP is considered highly iterative compared with the waterfall approach and the RAD model: XP prefers running similar short lifecycles time and again before the final product is developed, rather than working through lengthy stages. XP also involves more teamwork than RAD and waterfall, and it involves customers in the process, which is a rare practice when using either RAD or waterfall (Balaji and Murugaiyan 2012). Besides, the waterfall approach largely sticks to the planned schedule and would constantly need to start from the beginning if changes are made at some of the stages.

However, the RAD model is essentially open-ended, with the project intended to satisfy customer needs, whereas XP is more structured. Another difference is that the waterfall approach is convenient for both small and large projects, the RAD model suits small and medium-sized projects, while XP is more convenient for large projects (Dorette Jacob 2011). The waterfall approach may need anything from junior to senior developers, the RAD model may call for multi-talented, experienced and flexible developers, and XP tends to need experienced developers. On the other hand, the waterfall approach takes a negative stance towards change, while XP and RAD take a positive one owing to their flexibility.

1b - Problems producing software

Most of the process models have shown their capacity to deliver software through the software development process. However, the same models have a tendency to expose areas of weakness, leading to failure of the production process and its attendant consequences. This can be showcased in the Standish Group case study of 2009, which reported 34% of software projects as successful, 44% as challenged and the remaining 22% as failed. In the earlier 1995 Chaos Report, Requirements Engineering (RE) practices accounted for the success of around 42% of projects (Hussain et al. 2016), which equally implies that inappropriate RE practices could lead to failure in around 43% of projects. Researchers established that among the failed projects, 70% of the requirements could not be traced or identified, and most were never well organised. Incomplete requirements were another cause (Hussain et al. 2016). Cited scenarios include requirements being difficult to interpret or describe, requirements getting out of control, unplanned changes to the requirements, poor planning and missing dependencies. At some point the size of the project also mattered, as did the availability of resources. The ultimate consequences of project failure include wasted resources, additional costs from redoing the project and loss of trust in the team involved in the development process.

The second case study is the 2003 collapse that affected a large part of the US electrical grid, whose prime causes have been traced to system project failures associated with information technology. In a 2003 article, Julia King reported that at least 3 out of 10 IT projects tend to fail in the end, thereby affecting technology users as well, and that 70% of IT-related projects are likely to fail to meet their objectives (Frese and Sauter 2003). Most of the failed projects were either abandoned or faced cost overruns or time overruns, or could not secure the right resources within the required time (Kaur and Sengupta 2013). Causal factors behind the failed projects included, among others, unrealistic expectations, lack of planning, lack of appropriate resources and changing requirements. The consequences of project failure are dire: they do not only add costs but may kill the project outright, as has happened at some IT-related firms (Marques et al. 2017).

2. Function Points

A function point is a unit of measurement used to express the quantity of business functionality an information system provides to its user; function points therefore measure software size. The methodology that governs them is function point counting, which provides the least inaccurate measurement available for determining the size of software. In most cases, function point counting is done by trained and certified FP counters, although automated FP counting has recently emerged, performed through static analysis (D'Avanzo et al. 2015). Automated counts can be performed on significant applications comprising a range of large programs or technologies without human involvement.

The function point first found its way into IBM through Albrecht, who is credited with founding the method. Albrecht was initially interested in measuring the size of a data processing system for the purposes of determining the development effort. The basic idea behind FP revolves around the amount of functionality released to the relevant user; notably, both transactions and data can be evaluated in terms of this concept (Dumke and Abran 2016). Through the IBM solution, it was confirmed that a successful software development measure needs to meet a number of requirements. First, it should be consistent in determining both productivity and the productivity trends of support activities, development and maintenance. The measure should also promote actions and decisions that improve outcomes, which links to demonstrating actions meant to improve output, management and the estimation process (Tosi 2014). Other companies and projects, apart from IBM, that have benefited from FPs include the Convergent Telco Portals project, which focused on assessing user actions and web navigation using more specific information, and the Telco IT Project, which examined the software lifecycle in terms of the functionality and simplicity of applications and system software.

FPs have their own benefits and limitations. On the advantage side, the Function Point Method plays a focal role in quantifying and comparing the size of separate applications. The approach achieves this by quantifying the functionality associated with the business, which turns out to be the best way of measuring and comparing the size of system software or applications. Secondly, the approach is well suited to comparing the pricing estimates of separate vendors. With the help of FPs, companies can ask vendors to account for the FP sizing of the proposed application as part of their response to an RFP. The comparison provides insight into how far each respondent's proposal addresses the required functionality (Vickers and Street 2001). If the respondents to an RFP present FP sizings within the same range, companies can negotiate for better pricing more easily. Lastly, FP is an effective tool for comparing the productivity of separate teams. Companies are interested in measuring the size of the output of application development, and FP comes in handy because it offers a standard measure of the output associated with every team.
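To show what a count looks like in practice, the following minimal Python sketch applies the standard IFPUG component weights and the usual value adjustment factor formula, VAF = 0.65 + 0.01 * (sum of the 14 general system characteristic ratings). The component counts and ratings themselves are invented purely for illustration.

# IFPUG weights per component type, keyed by complexity.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},   # external inputs
    "EO":  {"low": 4, "average": 5,  "high": 7},   # external outputs
    "EQ":  {"low": 3, "average": 4,  "high": 6},   # external inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "average": 7,  "high": 10},  # external interface files
}

# Hypothetical counts for a small application: (type, complexity, count).
counts = [
    ("EI", "average", 6), ("EO", "high", 4), ("EQ", "low", 3),
    ("ILF", "average", 2), ("EIF", "low", 1),
]

ufp = sum(WEIGHTS[t][c] * n for t, c, n in counts)  # unadjusted function points

# 14 general system characteristics, each rated 0-5 (ratings invented).
gsc = [3, 4, 2, 5, 1, 3, 2, 4, 3, 2, 1, 3, 4, 2]
vaf = 0.65 + 0.01 * sum(gsc)                        # value adjustment factor

print("UFP =", ufp, " VAF =", round(vaf, 2), " adjusted FP =", round(ufp * vaf, 1))

With these invented figures the sketch yields 86 unadjusted function points and an adjusted size of roughly 89.4; comparing such totals across vendors or teams is exactly the use described in the preceding paragraph.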

Despite this series of advantages, FPs also have their own disadvantages. First, the standard Function Point measurement is hard to apply in the early stages of a software project, when counts rest on approximations and estimations that can be erroneous or even produce paradoxical results (Dumke and Abran 2016). The solution to this problem is the Early Function Point approach, which can identify software objects such as data and functionalities at separate levels of detail. Secondly, IFPUG Function Points show weakness in addressing the algorithmic aspect of elementary processes, as seen in their fallibility when measuring domains such as real-time and process-control system software. To address this shortcoming, the Extended Function Point approach is adopted to modify the FP count associated with the elementary process (Del Bianco et al. 2008). In addition, the standards and rules applied in FPs do not give a clear and unequivocal definition of who the user is. For instance, a function that enables air traffic control is hardly of the same importance as one that determines which printer receives a log file, yet FPs regard all functions as democratically equal. The best alternative here is to avoid the Value Adjustment Factor or convert it into a qualifier. If my company had to choose between using and not using FPs, I would take the side of implementing function point counting, which has a series of benefits: a careful quantification of the software and its functionality would help the company find value for its money. Moreover, the shortcomings of FPs can still be addressed by adopting more advanced methods or levels of the same tool.

References

D'Avanzo, L., Ferrucci, F., Gravino, C. and Salza, P., 2015, April. COSMIC functional measurement of mobile applications and code size estimation. In Proceedings of the 30th Annual ACM Symposium on Applied Computing (pp. 1631-1636). ACM.

Tosi, L.L.S.M.D., 2014. Using function point analysis and COSMIC for measuring the functional size of real-time and embedded software: a comparison.

Vickers, P. and Street, C., 2001. An Introduction to Function Point Analysis. School of Computing and Mathematical Sciences, Liverpool John Moores University, Liverpool, UK.

Kaur, R. and Sengupta, J., 2013. Software process models and analysis on failure of software development projects. arXiv preprint arXiv:1306.1068.

Marques, R., Costa, G., Silva, M. and Gonçalves, P., 2017. A survey of failures in the software development process.

Hussain, A., Mkpojiogu, E.O. and Kamal, F.M., 2016. The role of requirements in the success or failure of software projects. International Review of Management and Marketing, 6(7S), pp.306-311.

Frese, R. and Sauter, V., 2003. Project success and failure: What is success, what is failure, and how can you improve your odds for success. St. Louis, Missouri.

Munassar, N.M.A. and Govardhan, A., 2010. A comparison between five models of software engineering. International Journal of Computer Science Issues (IJCSI), 7(5), p.94.

Dorette Jacob, J., 2011. Comparing agile XP and waterfall software development processes in two start-up companies.

Viceconti, M., Zannoni, C., Testi, D., Petrone, M., Perticoni, S., Quadrani, P., Taddei, F., Imboden, S. and Clapworthy, G., 2007. The multimod application framework: a rapid application development tool for computer aided medicine. Computer methods and programs in biomedicine, 85(2), pp.138-151.

Pop, D.P. and Altar, A., 2014. Designing an MVC model for rapid web application development. Procedia Engineering, 69, pp.1172-1179.

Balaji, S. and Murugaiyan, M.S., 2012. Waterfall vs. V-Model vs. Agile: A comparative study on SDLC. International Journal of Information Technology and Business Management, 2(1), pp.26-30.

Petersen, K., Wohlin, C. and Baca, D., 2009, June. The waterfall model in large-scale development. In International Conference on Product-Focused Software Process Improvement (pp. 386-400). Springer, Berlin, Heidelberg.

Daud, N.M.N., Bakar, N.A.A.A. and Rusli, H.M., 2010, June. Implementing rapid application development (RAD) methodology in developing practical training application system. In 2010 International Symposium on Information Technology (Vol. 3, pp. 1664-1667). IEEE.

Dumke, R. and Abran, A., 2016. COSMIC Function Points: Theory and Advanced Practices. Auerbach Publications.
