Recently there have been several high-profile software disasters in which broken updates crippled devices. (I don't want to name them.)
Am I mistaken, or is this caused by a focus on fast, cheap development, piling on unwanted new features in an arms race against competitors?
It seems to be standard practice now to carry hundreds or even thousands of known defects during development and nonetheless to launch new versions with those defects still in place. They are then debugged on the market by a separate team of fixers.
There seems to be a "not our problem" attitude in software development, leading to huge technical debt.
Maybe poor implementation of Agile is to blame?
Or am I on the wrong track?