From Florian Mueck's blog. Not a funny article, but a serious, well-structured one, with clear points to extract from the movie character about decision making and leadership.
I hope you enjoy it as much as I have.
Unlike other disciplines, software development products cannot be fully validated before they are built. When you design a building or a bridge, applying physical models and laws, it is possible to guarantee that the construction will not collapse under normal conditions, excluding construction issues such as the concrete composition or the design metrics not being respected. This makes it possible to undertake very complex architectural projects that would otherwise be very risky to attempt. In summary, because there is science behind it, mankind has the means to achieve mastery in construction.
Software engineering has the aspirational objective of achieving this maturity, but unfortunately it is not possible to create a model of what you are building without building it. The model itself, once finished, becomes the deliverable. The closest comparison would be moving your software from your development environment to your production environment, with any intermediate steps you may have (test, preproduction).
To overcome this, and provide the discipline with all kinds of warrants during every step of the implementation, software engineers have come up with a wide variety of models, principles and patterns that guide and inspire them throughout the process, bringing light to a task that would otherwise remain uncertain in terms of achieving its goal and mystical in its means.
Even so, gurus, ideologists, visionaries and Illuminati in the field have created a terminology to keep the knowledge within the initiated circle.
Hence, we have that OOP, a paradigm in itself, should be SOLID, while within our GRASP, database transactions must be ACID, which can be weird for newcomers. You had better adjust yourself from the beginning by being TDD and/or BDD, maybe modularizing following an AOP approach.
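To make at least one of these acronyms concrete: TDD means the test is written before the code it checks. A minimal sketch in Python (the function and its behavior are invented for the example):

```python
# TDD sketch (illustrative names): the test comes first.

def test_slugify():
    # Red phase: this assertion fails until slugify exists and behaves as specified.
    assert slugify("Hello World") == "hello-world"

# Green phase: the minimal implementation that makes the test pass.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()
```

The point of the discipline is the order of the two steps, not the code itself: the failing test defines the behavior before any implementation is attempted.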
The list is endless even if we keep it limited to programming paradigms. This doesn't mean that the rest of science and engineering is free of paradigms that can neither be elevated to the category of laws or theorems nor rejected as partial or wrong.
The future, though, is promising: if we observe scientific paradigms, they sit at the boundaries of the discipline. It is not adventurous to consider paradigms progress pushers, and hence responsible for much of a field's evolution.
The difference is that computer science is a relatively young discipline and paradigms outnumber theorems.
And while it could seem contradictory, most of the theorems (Turing completeness, P vs. NP, the halting problem, the Turing test) are not relevant to the daily tasks and problems a computer engineer faces, although some appear, or should be considered, more often than intruders in the profession may want to admit.
Also, efforts have been made to provide powerful tools for more regular programming tasks. Formal verification lies in this category, but the reality is that we don't see it used very often. Its complexity makes it overwhelming for many tasks that would benefit from it, but it should still be kept in mind as a resource.
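Full formal verification needs dedicated tools, but a lightweight relative of the idea, stating a function's preconditions and invariants and checking them at runtime, can be sketched in plain Python (the function and its contract are invented for this example):

```python
def binary_search(items, target):
    """Return an index of target in sorted items, or -1 if absent."""
    # Precondition (the informal 'specification' of the input):
    assert all(items[i] <= items[i + 1] for i in range(len(items) - 1)), \
        "items must be sorted"
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        # Loop invariant: if target is present, it lies within items[lo..hi].
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

A proper verification tool would prove the invariant holds for all inputs; the assertion only checks it for the inputs you actually run, which is far weaker but still catches contract violations early.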
Until computer science reaches maturity, paradigms are the crutches that help us get to our destination, the baby walker that guides our intuition and feeds our experience to achieve our goals in computer engineering and software development. They are our best option while the Vademecum for software development remains unvalidated, unapproved and unpublished.
It is difficult to disagree or agree, but I would highlight this paragraph from the article's conclusions:
In fact, using a model such as The Frog and the Octopus is an effective way to engage people infected with Agile Fever because it removes reference to Agile from the discussion of software development altogether. Instead of putting an infectee immediately on the defensive by suggesting, for example, that an Agile based software development process may not be the best fit, a better tactic is to discuss the applicability and value of adopting development strategies in a general sense. Those discussions might include considerations that are typically associated with Agile based processes such as requirements flexibility, schedule elasticity, extent of customer engagement, ability to co-locate the staff, and any other attributes that might be relevant to the program’s context.
As a developer I have never had the feeling that "this or that programming language or technology fits all needs", and as a project manager the homologous attitude would be equally dangerous. Methodologies and processes are a reference that must always be adapted to the specific needs of a project and organization.
In the culture of a company, the trait that most clearly marks the enterprise's personality, and at the same time the one most influenced by the rest of the company's philosophy, is error management, at all levels of the organization.
Error is part of human behavior, and complex environments, where information is always incomplete, becomes obsolete in a matter of hours or days, carries uncertainty and is colored by source subjectivity, only make any business more error prone. Given this situation, one decision a company takes, consciously or unconsciously, is where to focus its error management effort. This decision is driven either by business needs or, quite often, by fears, status quo preservation and politics. Even if the decision is the same, the way it is taken and, more importantly, the discipline of reviewing it periodically and adjusting the business to the new situation are vital for long-term survival in the marketplace.
A company has three options. Let's discuss them:
Error avoidance: except in businesses where the impact of errors can have severe consequences, e.g. pharmaceutical, energy, airlines, this is an attitude that nowadays no company can hold for long. Maybe some years ago, when technology evolution and knowledge distribution happened at a much slower pace, big corporations could afford it, and could even lobby for barriers to entry in their field to preserve their oligopolistic position. In such a context, there is no need to look in the rear-view mirror nor to take risks by innovating in the business. These kinds of companies tend to have heavy bureaucratic processes and senior management committees who approve every single decision (sometimes without considering or knowing the low-level details), and errors are punished with early retirement or the sidelining of the nominated responsible. In this context the stimulus to innovate is low and inertia quickly appears on the stage. The next step is that the reasons that justify and drive business decisions are forgotten, like in the business tale "Five monkeys, a banana and corporate culture"; here is the graphical version (from http://www.slideshare.net/shaldag/a-story-about-5-monkeys):
Error prevention: this is the category where, traditionally, most companies have worked. Companies know their business; they work hard to keep their leadership or to catch the leading competitor. Metrics are taken before, during and after important transition phases in the product or service development cycle, and adjustments are made according to the feedback obtained from the controls in place. The more agile, precise, efficient and synchronized with your customers you are, the more likely you are to outperform in your sector. Zero error does not exist, but it must still be pursued. Every error costs money, and the longer an error propagates through the process the more it costs, because more work has been invested in it and/or you need to go further back to fix it. This is the policy that has contributed most to the development of manufacturing, development and design processes, methodologies and certifications: Six Sigma, Lean and a variety of continuous improvement variants emerge from this value generation philosophy. Root Cause Analysis (RCA), Business Impact Analysis (BIA), Risk Assessment (RA), Cost-Benefit Analysis (CBA), Enterprise Feedback Management (EFM), Return on Investment (ROI),… are metrics and techniques used to decide where to focus the effort when improving, starting or closing a manufacturing process of any kind.
Error mitigation: this is the philosophy of the digital age and the Internet. It is a mantra for those eager for success, who can't wait to share their work with others for their mutual enjoyment. Why bother looking for errors under the microscope if there will always be more, and the environments, usage models and user needs are so varied that it would be impossible to cover even 1% of the test matrix? What if someone releases before you do and takes the success that would otherwise have been yours? Why worry about publishing if you can republish with total flexibility, at almost no cost and with little disturbance to users (in most cases)? The physical world is not relevant; it is just a temporary container and a medium for the bits being transferred… (OK, I got too metaphysical.) The Internet makes content distribution easy and potentially universal, and social networks activate this potential, so the important thing is to release new concepts, so that internauts can choose from a myriad of apps, services and products and talk about them, for good or bad. Based on that, you set the new course for your product, fix issues, and add (and remove) functionality. And so that the process is not as crazy as it might appear, you take concepts that are already mature and apply them to your context: Lean Startup. You don't release version 1.0 but version 0.1 alpha. Somebody told you that it was the Minimum Viable Product (MVP) of what your service would be in the future, and you thought you were lucky that someone knew what your product was going to be, because you were struggling to define anything beyond what you had released, except for those crazy ideas that were too costly for an alpha of the unknown.
Related to startups and innovation: creativity/innovation and a process-driven development methodology are not incompatible; in fact, the latter frees resources and fosters the former. The more structured your tools and processes are, the less error prone your environment will be and the less attention low-added-value details will require. That focus can then be transferred to more creative and valuable activities that will help you differentiate your newborn venture. You must embrace a methodology that suits your needs. Especially if you need to collaborate with others (and few projects can be developed in isolation), you must be able to structure your processes, to make sure everything you do fits in your plan, and where it fits. Otherwise you will be changing course too often, getting nowhere, delivering nothing.
Looking up "error management theory" on Wikipedia, we find:
In the decision making process, when faced with uncertainty, a subject can make two possible errors: type I or type II. A type I error is a false positive or, in layman's terms, playing it safe. A fire alarm that later turns out to be a false alarm is a type I error. A type II error is a false negative, or siding with skepticism. Ignoring the fire alarm because it is often wrong, when it later turns out to be accurate, is a type II error.
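The two error types above can be made concrete with a tiny sketch over the fire alarm example (the event data is invented purely for illustration):

```python
# Each event pairs what the alarm said with what actually happened.
events = [
    # (alarm_raised, fire_happened)
    (True,  False),  # type I: false positive (alarm, but no fire)
    (True,  True),   # correct detection
    (False, True),   # type II: false negative (no alarm, but a real fire)
    (False, False),  # correct silence
]

type_i  = sum(1 for alarm, fire in events if alarm and not fire)
type_ii = sum(1 for alarm, fire in events if not alarm and fire)
print("type I (false positives):", type_i)
print("type II (false negatives):", type_ii)
```

Which of the two counts an organization works hardest to drive down is, in essence, the error management choice discussed in this post.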
Error avoidance organizations are keen on false positive (type I) errors, while error mitigation ones are comfortable dealing with false negative (type II) errors. Error prevention organizations are type II organizations that want to become type I, although deep in their hearts they enjoy their type II traits, as those are what make them grow wiser, stronger and faster every day.
With these scenarios in mind, you must know where your organization stands and realize whether it is where it should be, how long you can afford to stay there if a change is needed, or, in the best case, how to reinforce your position so that your evolution stays up to date in terms of development life cycle. More importantly, this can help you understand why people in your company act and react in certain ways when facing a new task, a new challenge, a tight deadline,…