A Good Architecture Is All About Probability - Or It Is Sufficient To Be Good Enough
If you managed to create a perfect architecture, you probably missed your customer's expectations or at least unnecessarily burned some money. No customer pays you to build perfect architectures - it is sufficient to be good enough. Every application consists of an important, domain-specific kernel and some supportive, less interesting services like master data management or configuration (usually not the primary value of the application). The domain-specific logic is the added value for your customer; everything else has to exist but is less important and can therefore be developed more efficiently, with less effort. Architectural decisions are influenced by non-functional requirements like scalability, performance, testability, flexibility, maintainability and dozens of other "ilities".
Usually the customer implicitly expects all non-functional requirements to be fulfilled to a high degree, which is impossible in practice - non-functional requirements influence each other. Layering hurts performance, scalability can hurt performance as well, and modularization can increase the overall complexity. Nothing comes for free. It is also suboptimal to apply the same strategy to all subsystems (domain and supportive services alike): multiple layers for CRUD use cases, for example, not only increase complexity and hurt performance, but even degrade maintainability. "Hacked", monolithic domain logic is even worse. The focus on a particular non-functional requirement such as scalability can have a huge impact and has to be well thought out. Scalability, for instance, implies statelessness and therefore a rather procedural programming model, which not only increases complexity but even obfuscates the domain logic.
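To make this concrete, here is a minimal sketch (all names are hypothetical) of what "good enough" can look like for a supportive CRUD subsystem: a plain JPA entity and a single stateless session bean that uses the EntityManager directly, without any additional layers.

```java
import javax.ejb.Stateless;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

// Sketch only: each class would live in its own file.

// Hypothetical supportive entity - plain data, nothing interesting from a domain perspective.
@Entity
public class Configuration {

    @Id
    @GeneratedValue
    private Long id;
    private String name;
    private String value;

    // getters and setters omitted for brevity
}

// One boundary class is good enough for CRUD: no DAO, no facade, no transfer objects.
@Stateless
public class ConfigurationService {

    @PersistenceContext
    EntityManager em;

    public Configuration save(Configuration configuration) {
        // merge covers both create and update in this simple case
        return em.merge(configuration);
    }

    public Configuration find(long id) {
        return em.find(Configuration.class, id);
    }

    public void remove(long id) {
        Configuration configuration = em.find(Configuration.class, id);
        if (configuration != null) {
            em.remove(configuration);
        }
    }
}
```

The domain kernel, in contrast, deserves the richer design - the effort should go where the added value is.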
A reasonable architecture does not try to realize all subsystems of an application perfectly, but rather recommends pragmatic solutions for the given problem. There is another important factor: the likelihood of a certain event. DAOs, for example, were originally intended to abstract from different data stores, but what if your database will likely live longer than your application? Is it really beneficial to be totally client independent, knowing that it will always be a web client, a Flash or an iPhone application? How likely is change in a certain part of your system?
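For comparison, this is roughly the indirection a DAO buys you (again, the names are hypothetical and Customer is assumed to be an existing JPA entity). The extra interface and implementation only pay off if a second data store is actually likely to show up; if the database will outlive the application, injecting the EntityManager directly into the boundary, as in the sketch above, is usually good enough.

```java
import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// The store-neutral abstraction a DAO is supposed to provide ...
public interface CustomerDao {

    Customer find(long id);

    List<Customer> findAll();

    Customer save(Customer customer);
}

// ... and the JPA-specific implementation hidden behind it (sketch only: each type in its own file).
@Stateless
public class JpaCustomerDao implements CustomerDao {

    @PersistenceContext
    EntityManager em;

    @Override
    public Customer find(long id) {
        return em.find(Customer.class, id);
    }

    @Override
    public List<Customer> findAll() {
        return em.createQuery("SELECT c FROM Customer c", Customer.class).getResultList();
    }

    @Override
    public Customer save(Customer customer) {
        return em.merge(customer);
    }
}
```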
Unfortunately, to estimate the probability of an event you need domain knowledge and some experience.
The even more important question is: "What happens in the worst case?" How long would it take to introduce another type of UI, replace the database or switch the application server? If this can happen in a reasonable amount of time, it's ok. Whether the amount of time is reasonable or not should be decided by the customer, not the architect :-)...
Many J2EE architectures were entirely exaggerated. They were intended to cover all, even very unlikely, cases. The result was many dead layers with lots of transformations and indirections. This introduced additional complexity, obfuscated the actual business logic and missed the point. The problem was generic, stereotypical architectures, developed once and applied to every possible use case. Even a guestbook was developed with at least 15 layers :-). So keep it small, keep it simple, and focus on the essential capabilities of your application.
[In "Real World Java EE Patterns" I described pragmatic Java EE architectures with a minimal set of patterns]