My team has historically averaged about 40 people over several years, often more. We have a fairly complex product that cuts across several problem domains. We switched from waterfall to Scrum, and looking back I have to say that neither worked (well). As one PM put it, our project was "always yellow or red" (i.e., at high risk or seriously delayed).
Waterfall "worked" because once we were officially done with development and moved to testing, not many of the officially done features actually worked; the effective development happened during (long) testing and was tediously corrected. You could say it was official waterfall on the surface, with a sort-of iterative process underneath.
We switched to agile/Scrum (in a very basic form). Out of the frying pan, into the fire: we now have terrible problems getting a logically consistent product. That's not strange given that we have 5 major sub-teams (not all in one geographic location) and Conway's law operates. Plus, repeated end-of-iteration stress over several years tends to wear people out (that's my subjective feeling, though; I have no hard data to prove it). It has also been terrible for keeping people in one place/subsystem of the product long enough to become experts and highly productive there: I get the impression that under this constant change we are evolving into jacks of all trades and masters of none. In hindsight, I cannot say that Scrum has given us any definite advantages (we have "scrum of scrums", of course, and I can't detect that it has made any difference). Certainly, the lack of big up-front design is, from my POV, a serious flaw: work gets unsynchronized and badly partitioned, and needs to be redone or modified later.
It's like the classic CS process deadlock, just moved to the conceptual/design level: dev/sub-team A waits for dev/sub-team B to complete design X, which A relies on to complete its own design Y, while B waits for A to finish Y so that B can do X.
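The circular wait can be made concrete with a tiny dependency walk; the team and design names here are hypothetical, just to illustrate the shape of the problem:

```java
import java.util.*;

// Minimal sketch of the design-level circular wait described above.
// All team/design names are hypothetical illustrations.
public class DesignDeadlock {
    public static void main(String[] args) {
        // Who is blocked on whose deliverable:
        Map<String, String> dependsOn = new HashMap<>();
        dependsOn.put("teamA.designY", "teamB.designX"); // A can't finish Y without X
        dependsOn.put("teamB.designX", "teamA.designY"); // B can't finish X without Y

        // Follow the chain; revisiting a node means nobody can ever start.
        Set<String> seen = new LinkedHashSet<>();
        String node = "teamA.designY";
        while (node != null && seen.add(node)) {
            node = dependsOn.get(node);
        }
        System.out.println(node != null ? "deadlock among " + seen : "no cycle");
        // prints: deadlock among [teamA.designY, teamB.designX]
    }
}
```

The textbook cure is the same as for lock deadlock: impose a global ordering (i.e., someone decides which design gets drafted first, even if imperfectly), which is essentially what up-front design does.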
Scrum says nothing about infrastructure tasks apart from "cram it into some user story" (so infrastructure suffers). Three additional smaller sub-topic teams for one particular area (internal, external, and a content-provider team using the products of the previous two) still can't even agree on a common data representation and format. Tracking the actual state of a zillion tasks is a nightmare. The consensus on the team is that Scrum has given us no benefits while imposing additional costs. It certainly has not made us pay better attention to what customers/end users actually need.
I can't believe I'm saying this, but for us waterfall worked (slightly) better than Scrum (or, to be more precise: it was not as bad, if only slightly).
Fortunately, some subsystems are pretty much orthogonal; otherwise we'd get nothing done. This is pure random chance, though: a tad more internal dependency between subsystems (conceptual integrity, performance, etc.) and we'd be toast. Being saved by dumb luck is no proof of effective organization or wisdom.
From my POV it's like this: if a project is small (grokkable by one person in its entirety, or close to it) or consists of a bunch of nearly independent concepts and features, any methodology will work. If a project is big (one person can't even grok an entire subsystem, and there are lots of internal dependencies between subsystems in concepts, design, business logic and algorithms), no methodology works.
The only solution I personally see is something like this:
- Do the big up-front analysis (collecting requirements) and a big waterfall-style design (principles, assumptions, business logic).
- Treat it as a draft, an initial starting point to be modified on the fly.
- Develop this "big design" iteratively, using Scrum or Kanban.
- Have analysts and designers constantly re-evaluate the big picture (whether the product stays consistent) and keep design and code in sync.
If that wouldn't work, I'd have to say that my project is a "Gordian knot from hell", basically unsolvable (well) from my POV.
This super-long background brings me to my question: are there any PM methodologies that work well on big projects?
Further details:
I understand that the map is not the road. Sure. But I can't envision getting to Alaska from SA without a map. :-) As they say in math, methodology to me is a necessary condition, not a sufficient one.
Issues:
- Deadlines: tight.
- Budget: seriously underfunded.
- Scope changes: constant.
- Requirements: often changing, lots of gaps.
- Team structure: a revolving-door situation; people are shifted around mindlessly.
- Other issues: lots of (truly bad) legacy code; no appreciation by mgmt of the long-term situation and costs/benefits.
- Oh and: no (application) SW architect.
I know that people will shoot down my vision as "asking for an autopilot/panacea", but I tend to think A. N. Whitehead was onto something when he said that civilization advances by extending the number of operations we can perform without thinking. A perfect methodology would fit any team: just have the guts to follow the procedure. If "other factors" (the unknown factors) are necessary to make a project succeed, what are those?
I'm starting to think this is all futile. I've done some rough example calculations. Imagine: n features, m classes (assuming OO programming), k software characteristics (reliability, performance in various contexts, security, implementation cost, defect rates, development time, testing cost, testing time, etc.), o people. Potential complexity: every feature can be divided into p classes (p < m, with the sum of p over all features equal to m), impacts at most k characteristics, and can require the work or intervention of any subset of the o people.
So the worst-case complexity (number of combinations) is n*m*k*o. With 20 features, 500 classes, 10 characteristics and 40 people, that is 4,000,000 (4 million). Assuming a generous release time of 18 months, that is about 222,000 combinations per month. Suppose we cut the 500 (or "m") classes into 50 manageable collections, each modifiable/refactorable internally without impacting another collection. That's still over 22,000 combinations per month.
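For what it's worth, the arithmetic above checks out; a throwaway sketch using the same (hypothetical) numbers:

```java
// Back-of-envelope check of the combination counts above, using the same
// hypothetical numbers: 20 features, 500 classes, 10 characteristics, 40 people.
public class ComplexityEstimate {
    public static void main(String[] args) {
        long features = 20, classes = 500, characteristics = 10, people = 40;

        long worstCase = features * classes * characteristics * people;
        System.out.println("worst case: " + worstCase);             // 4000000
        System.out.println("per month (18 mo): " + worstCase / 18); // 222222

        // Group the 500 classes into 50 independently modifiable collections:
        long grouped = features * 50 * characteristics * people;
        System.out.println("grouped, per month: " + grouped / 18);  // 22222
    }
}
```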
It is not humanly possible to track that many potential feature/class/person impacts on the k software characteristics and select the best combinations (i.e., plan ahead). Some sci-fi-style hard-AI tool would have to do it. Otherwise, we're just throwing a Hail Mary: maybe it will work well enough, in enough cases, that the project won't get delayed. Or maybe it won't.
Background info to give you a better idea of the system in question. It has to be somewhat "anonymized" for reasons of confidentiality.
Data flow
This one is fortunately pretty simple: agents <-> server, with the server consisting of the following major subsystems:
- agent comm
- inventory
- data aggregation
- report generation
(There are of course many supporting subsystems, from the server config console to the web UI to the REST API, but I'm not including those here.)
Implementation
Technologies: C, Java, SQL. Environments range from hundreds to tens of thousands of agents (the biggest environment: about 100K agents).
Code size: several thousand Java files organized into approx. 500 packages, about 900 KLOC total so far (not counting the agent). So it's not that big, but it's complex in the sense of involving many problem domains.
Aggregation is the most complex and resource-intensive part of the app. We're using plain SQL for performance, but even then aggregation can't finish within 24 hours for large environments, i.e., by the time the next day's round of aggregation should start. Miscalculation can be fairly serious, since it can easily cost a customer dozens of millions of dollars.
Here is where language sort of breaks down: the major problem with our project is not that it is big in KLOC/FP or "complex" per se. The data flow architecture is simple. The problem is that the project is more like a "bowl of paperclips" than a bunch of independent features: you can't touch any feature or issue without impacting most of the other aspects, subsystems and issues.
– LetMeSOThat4U Dec 20 '12 at 16:02