System scope and scale
If all of these items are documented and/or diagrammed thoroughly and accurately, they will collectively provide a holistic view of the total scope of a system:
- Every system component role should be identified in the Logical Architecture
- Where each of those components actually resides should be identified in the Physical Architecture
- Every use case (and hopefully every business process) that the system is supposed to implement should be identified in the use-case documentation, and any of the underlying processes that aren't painfully obvious should have at least a rough happy-path breakdown
- Every chunk of data that moves from one place or process to another should be identified in the Data Flow, with enough detail to collate a fairly complete picture of the structure of that data as well
- The formats and protocols that govern how that data moves about should be identified, at least for any part of the system that involves more than just passing objects from one function or method to another in the code-base
- A fair idea of where and how that data is persisted should be discernible from the Logical, and maybe the Physical, Architecture
The only significant piece not yet covered is the scale of the system. If the scope is how many types of objects are being worked with or moving around in the system, the scale is how many of those objects exist, either at rest (stored in a database, for example) or active at any given time.
Scale can be hard to anticipate with any accuracy, depending on the context of the system. Systems such as the hypothetical refueling tracker and order-processing/fulfillment/shipping system that have been used for illustration are generally going to be more predictable:
- The number of users is going to be reasonably predictable: all employees and all customers pretty much covers the maximum user base for both of those
- The number of objects in use is also going to be reasonably predictable: the delivery company only has so many trucks, after all, and the company running the order system, though probably less predictable, will still have a fair idea of how many orders are in flight at peak and at typical levels
When a system or application enters a user space such as the web, though, there is potential for radical variation, even over very short periods of time. In either case, some sort of planning around expected and maximum/worst-case scale should be undertaken.

That planning may have significant effects on how code is designed and implemented. As a basic example, fetching and working with a dozen records at a time out of a few hundred or thousand total records doesn't require nearly the attention to efficiency that fetching those same twelve records out of several million or billion would. If planning for even potential massive surges in use involves being able to scale out to multiple servers, or to load-balance requests, that might also affect the code, though probably at a higher, interprocess-communication level.
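The twelve-records example can be sketched in code. This is a minimal, hypothetical illustration (the `Record` type, the in-memory `TABLE`, and `fetch_page` are all invented here, not taken from the systems discussed above): at small scale a naive slice of a full result set is harmless, but the same page-of-twelve access pattern at millions of rows would need an indexed, paginated query so that cost tracks the page size rather than the table size.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    id: int
    payload: str

# A toy in-memory "table" standing in for a real data store. At a few
# thousand rows this is fine; at millions or billions, materializing the
# whole table (or scanning it per request) is exactly the efficiency
# problem that scale planning is meant to surface early.
TABLE: List[Record] = [Record(i, f"row-{i}") for i in range(100_000)]

def fetch_page(offset: int, limit: int = 12) -> List[Record]:
    """Return one page of records.

    Against a real database at large scale, this would instead be an
    indexed, paginated query (SQL LIMIT/OFFSET, or keyset pagination on
    the id column), so the work done is proportional to `limit`, not to
    the total number of rows.
    """
    return TABLE[offset:offset + limit]

page = fetch_page(24)
# Each call touches only one page's worth of records, regardless of
# how large TABLE grows.
```

The design point is that the page-oriented interface stays the same as scale grows; only the implementation behind `fetch_page` has to change, which is why anticipating scale during design is cheaper than retrofitting it.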