The Capclaim Foundation, led by Kenneth Berk Live, is fighting a legal battle on behalf of the bankrupt Equihold against ICT services provider Capgemini. The aim is to recover 43 million in damages. In recent months Capclaim has made documentation available to market experts so that they can form their own judgment on the matter. Computable experts also had access and will share their findings on this website in the coming period. This second contribution in Kurt King's five-part series is about the quality of the software delivered by Capgemini.
At the core of the claim that Kenneth Berk Live has brought is the quality of the software Capgemini delivered to Equihold. Quality can be determined in several ways and is often difficult to pin down. To assess quality objectively, it must be measurable and the assessment must be verifiable: someone else carrying out the research in the same way should reach the same conclusion.
A total of eight studies were carried out into the quality of the software. Because these studies were paid for and initiated by one of the parties, it is important to look extra carefully at how they were conducted in order to properly assess the objectivity of their findings.
Business Layer
The first two studies took place in January 2007 and were carried out by Capgemini employees. They were prompted by doubts and complaints expressed by Equihold about the quality of the delivered software, and specifically about the lack of a business layer.
The lack of a business layer meant that it was impossible to make the software parameter-driven in order to support different sports (not all sports have eleven field players, two halves and 45 minutes of playing time).
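To make the point about the missing business layer concrete: a parameter-driven design keeps sport-specific rules as configuration data rather than hard-coded assumptions. A minimal sketch in Python (the actual product was a .NET application; all names and the basketball values here are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SportConfig:
    """Sport-specific rules kept as data, not hard-coded logic."""
    field_players: int
    periods: int
    minutes_per_period: int

# Hypothetical configurations; only football's values appear in the article.
SPORTS = {
    "football": SportConfig(field_players=11, periods=2, minutes_per_period=45),
    "basketball": SportConfig(field_players=5, periods=4, minutes_per_period=10),
}

def total_playing_time(sport: str) -> int:
    """Regulation playing time in minutes, derived from parameters."""
    cfg = SPORTS[sport]
    return cfg.periods * cfg.minutes_per_period
```

Supporting a new sport then means adding one configuration entry instead of changing the code, which is what a business layer was supposed to enable.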
A report from FxCop, a static analysis tool for .NET code, also played a role. It produced over eleven thousand warnings on the delivered software, which gives an indication of the quality of the software.
Recommendations
In the executive summary of this report the quality is rated as “generally good”, although the quality is not consistent. This means that parts of the software can still be improved considerably. To that end, concrete recommendations are given:
• Do not create unnecessary objects.
• Delete files from the software that are not used.
• Make more use of comments.
• Ensure that event and error handling are in place everywhere.
The report gives examples on which the recommendations are based. It also indicates that the separation of the different layers has been carried out properly and in accordance with the technical specifications. The report does not make clear whether this layer is actually in place and functioning as such, or is merely present without further functionality. That this is an essential difference will become apparent from a follow-up study. No further substantiation is given for why the software is “generally good”.
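Two of the recommendations quoted above, avoiding unnecessary object creation and adding error handling, can be sketched as follows (Python for illustration only; the delivered software was .NET, and this function is hypothetical):

```python
import logging
from typing import Optional

logger = logging.getLogger(__name__)

# "Do not create unnecessary objects": build constants once at module
# level rather than re-creating them on every call.
SEPARATOR = "-" * 40

def parse_amount(raw: str) -> Optional[float]:
    """Ensure error handling: catch and log a parse failure instead
    of letting the exception escape unhandled."""
    try:
        return float(raw)
    except ValueError:
        logger.error("could not parse amount: %r", raw)
        return None
```

Whether such fixes were ever applied to the codebase is exactly the question the later studies raise.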
This same report refers to an earlier study, also conducted by Capgemini, which specifically examined whether the programming followed the guidelines. That second report was never shared, even after Equihold's insistence.
An empty business layer
In January 2008 the software was re-examined by an Equihold employee, who found that the business layer is present but “empty” and therefore has no function. This contradicts the first Capgemini report of January 2007, in which the presence of this business layer is reported without further qualification.
In 2010, after Equihold had been forced to cease its activities in early 2009, the software was assessed again by the same Equihold employee. His report is in effect an indictment of Capgemini's process and criticizes the delivered software, which in his eyes is useless. Several aspects are assessed as severely inadequate, with examples given. Again it is noted that the business layer is present but has no real function.
External research into the quality
In October 2010 Equihold asked Software Quality Measurement and Improvement (sqmi) to investigate the delivered software. Sqmi uses a methodical approach in which the number of faulty lines of code is counted and grouped into categories. On average, 175 faulty lines per thousand lines of code were found, yielding an F rating: a three on a scale from one to ten, the second-lowest score in sqmi's history.
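The sqmi figure is simple arithmetic: defect density is the number of faulty lines per thousand lines of code (KLOC). A sketch, with rating thresholds that are purely illustrative, since the article only tells us that 175 per KLOC maps to an F (a 3 out of 10):

```python
def defects_per_kloc(faulty_lines: int, total_lines: int) -> float:
    """Defect density: faulty lines per thousand lines of code."""
    return faulty_lines / total_lines * 1000.0

def letter_rating(density: float) -> str:
    """Map density to a letter grade. The thresholds are assumed for
    illustration; sqmi's real calibration is not given in the article."""
    if density < 10:
        return "A"
    if density < 50:
        return "C"
    return "F"
```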
Based on this study, the conclusion is that the maintenance costs for this product are unnecessarily high and that the software is unsuitable as a platform for further development. Noteworthy is that issues are found that Capgemini itself had put forward as recommendations in 2007. For me this raises the question of what was done with those recommendations: were they ignored, implemented but not secured, or something else?
Again an internal investigation
In February 2013 Capgemini had an extensive internal investigation carried out into the state of affairs. With respect to software quality, the summary notes that there are errors, but not of such a nature that they stand in the way of the software's functioning. This finding is contrary to the experience Equihold and its (pilot) customers had with the software.
Another external study on the quality
In 2014 Capgemini asked the Software Improvement Group (SIG) to investigate the maintainability of the software. For this they used the SIG/TÜViT quality model for Trusted Product Maintainability (April 2009). This is also a methodical approach, based on counts and a framework of standards.
Last June they presented this report, with the main conclusion that maintainability is “market average”, relative to the benchmark used by SIG. That need not be the same benchmark sqmi used when it arrived at its F rating. Note that SIG directly investigated the maintainability of the software, whereas sqmi investigated the software in general and concluded, as a derivative, that maintenance is unnecessarily expensive.
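SIG's model rates a system against a benchmark of previously measured systems, so “market average” means roughly the middle of that distribution. A toy sketch of benchmark-relative star rating; the percentile cut-offs here are assumptions for illustration, not SIG's actual calibration:

```python
from bisect import bisect_right

# Assumed cut-offs: upper percentile bounds for 1..4 stars; anything
# above the last bound gets 5 stars. Not SIG's actual calibration.
CUTOFFS = [5, 35, 65, 95]

def stars_from_percentile(percentile: float) -> int:
    """Map a benchmark percentile (0-100) to a 1-5 star rating,
    where the 50th percentile ("market average") lands on 3 stars."""
    return bisect_right(CUTOFFS, percentile) + 1
```

This also shows why two benchmark-relative verdicts can disagree: the same system scores differently depending on which population of systems it is compared against.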
Questions
SIG and Capgemini employ the principle that the quality of software is determined by its degree of maintainability. I ask myself whether that is a valid principle. For how should we interpret these reports? They make statements that (seemingly) contradict each other. In some cases substantiation is lacking, and a judgment is rendered that cannot be quantified. During the trial these reports will also be discussed. What weight should the court give to these reports, separately and as a whole?
I also wonder whether the eight studies conducted are enough to form a judgment about the quality of the software. The investigations focus on maintainability, technical programming and technical requirements (with a focus on the business layer), while stability and performance are mentioned only a few times. Whether the software meets the established functional requirements will be covered in a subsequent article, in which the test and the test report (FAT) are examined.
Are the quality characteristics mentioned complete enough to judge the quality of the software? For completeness: in its defence Capgemini states that the investigated software is not the latest version, and that the latest software is (almost) free of defects. Several (Equihold) investigations, however, find the number and severity of defects to be such that short-term correction is not deemed possible, and a rebuild is the only way to make the software meet expectations.
That leaves many questions unanswered.
Three parts of Kurt King's five-part series are still to follow. The first part was about the tender; contributions will follow on the effort versus performance obligation, on governance, and on how this could have happened. After that, several other Computable experts will give their opinion on the available documentation.