Network Design Principles: A Structured Review for Design Engineers

Abstract

Network design involves making decisions within constraints, balancing factors like cost, reliability, scalability, and simplicity. This document reviews essential design principles with a practical engineering focus, using a “Hierarchy of Needs” to ensure foundational aspects such as connectivity and integrity (reliability and security) are addressed before higher-level features like automation. Principles are grouped into four domains: decision and constraint, architecture, process, and lifecycle. Trade-off matrices help designers weigh conflicting priorities and set requirements based on risk and operational needs, enabling consistent and rational choices across technical settings.

Introduction

Network design is fundamentally an exercise in decision making under constraints. Designers rarely face a single objective; instead, they must balance competing requirements such as cost, reliability, security, scalability, and operational simplicity. To guide these decisions, engineers rely on design principles, general rules and heuristics that capture lessons learned from decades of building and operating communication systems.

Design principles frequently conflict with one another. A design that maximizes redundancy may violate simplicity. A design optimized for flexibility may increase cost and operational complexity. A design that minimizes cost may reduce resilience or scalability. Because of these tensions, it is neither possible nor desirable to optimize all principles simultaneously.

Effective network design therefore requires more than knowing what each principle means; it requires understanding when to prioritize one principle over another. The role of the designer is not to apply every principle equally, but to make informed trade-offs based on requirements, constraints, and risk tolerance.

This document presents a set of network design principles organized around this conflict-driven perspective. Instead of treating principles as isolated concepts, the discussion emphasizes:

  • How principles interact and conflict
  • How requirements influence their priority
  • How structured decision methods can guide trade-offs

The goal is to move beyond definitions toward a practical framework for engineering judgment, helping designers translate abstract principles into real-world design decisions.

While this document emphasizes how principles interact and occasionally conflict, it remains fundamentally a structured review of the core design principles that guide professional network engineering practice.

Hierarchy of Needs

Network design principles frequently conflict in practice. A design that improves reliability may increase cost and complexity, while a design that maximizes simplicity may reduce flexibility or scalability. To make consistent decisions when such conflicts arise, designers need a prioritization model.

The Hierarchy of Needs provides this model. While inspired by Maslow’s psychological framework, this concept is synthesized from the functional “generations” of networking described by James D. McCabe (McCabe, 2007). By reinterpreting these historical stages as a tiered priority system, we can ensure that foundational capabilities are stabilized before advanced optimizations are pursued.

A practical hierarchy for modern network design can be summarized as follows:

  1. Connectivity: The network must fundamentally connect endpoints and systems.
  2. Integrity (Reliability & Security): Connectivity must be stable, resilient to failure, and protected against compromise.
  3. Interoperability: The network must function seamlessly across diverse technologies, vendors, and external domains.
  4. Service Delivery: The network must provide the specific performance, quality of service (QoS), and manageability required by applications.
  5. Autonomy (Self-Configuration): The network should adapt to changing requirements through automation.

This hierarchy acts as a decision guide: simplicity should not be prioritized over Integrity, and Autonomy should not be introduced if it reduces Interoperability or operational clarity. By satisfying these needs in order, designers avoid building sophisticated features on a fragile base.

The hierarchy does not replace other design principles; rather, it provides the context in which they should be applied. Designers should first ensure that foundational needs are met, then apply other principles to refine and optimize the design.
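The decision-guide role of the hierarchy can be sketched as a simple gate check: satisfy the most foundational unmet need before investing in higher tiers. The tier names follow the list above; the requirement-status flags are invented for illustration.

```python
# Sketch: using the Hierarchy of Needs as a decision gate.
# The status flags below are illustrative assumptions, not a standard model.
HIERARCHY = ["Connectivity", "Integrity", "Interoperability",
             "Service Delivery", "Autonomy"]

def lowest_unmet_need(status):
    """Return the most foundational tier that is not yet satisfied."""
    for tier in HIERARCHY:
        if not status.get(tier, False):
            return tier
    return None  # all needs satisfied

status = {"Connectivity": True, "Integrity": False, "Autonomy": True}
print(lowest_unmet_need(status))  # Integrity: stabilize this before automating
```

Here the check flags Integrity as the next priority even though an Autonomy feature is already marked done, reflecting the rule that sophistication should not be built on a fragile base.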

Trade-Off Matrix

Network design rarely allows all principles to be optimized simultaneously; therefore, designers must evaluate trade-offs based on requirements, constraints, and risk tolerance. The matrix below highlights typical conflicts and the factors that usually guide resolution.

Principle A | Principle B | Nature of Conflict | Typical Resolution Driver
KISS | Redundancy | Additional redundancy increases complexity | Availability requirements
Cost-Benefit | Factor of Safety | Higher safety margins increase cost | Risk tolerance
Flexibility | Consistency | Flexible designs reduce standardization | Operational maturity
Modularity | Performance Optimization | Modular boundaries may introduce latency or overhead | Performance requirements
Automation | Interoperability | Automation frameworks may depend on vendor features | Multi-vendor constraints
Scalability | Usability | Highly scalable designs may be harder to operate | Operational skill level
Segmentation | Manageability | More segments increase policy and configuration overhead | Integrity requirements

The matrix is not intended to prescribe specific solutions, but to help identify which requirement should dominate when conflicts arise. Designers should:

  1. Map requirements to the hierarchy of needs.
  2. Identify which principles are in tension.
  3. Use constraints (cost, risk, operational capability) to guide prioritization.

This approach reinforces that design principles are decision tools, not fixed rules.
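The three-step prioritization above can be made explicit with a small weighted decision matrix. The criteria, weights, and scores below are invented for illustration; in practice they would come from the requirements analysis.

```python
# Sketch of a weighted decision matrix for resolving a principle conflict
# (here, redundancy vs. simplicity/cost at a small branch).
# All weights and scores are illustrative assumptions.
def score(option, weights):
    return sum(weights[criterion] * option[criterion] for criterion in weights)

weights = {"availability": 0.5, "simplicity": 0.3, "cost": 0.2}
options = {
    "dual routers":  {"availability": 9, "simplicity": 4, "cost": 3},
    "single router": {"availability": 5, "simplicity": 9, "cost": 8},
}
best = max(options, key=lambda name: score(options[name], weights))
print(best)  # single router (6.8 vs 6.3 under these weights)
```

Changing the weights, for example raising availability to reflect stricter uptime requirements, flips the outcome, which is exactly the point: the requirement that dominates determines which principle wins.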

The trade-off matrix applies to all subsequent principle categories and should be consulted whenever competing design objectives arise.

Decision and Constraint Principles

These principles influence what is feasible given requirements, risk, and resources.

80/20 Rule

The 80/20 rule is also known as the Pareto Principle. It derives from the work of Vilfredo Pareto, which describes the relationship between the elements that make up a whole and the influence each element exerts on the whole.

The 80/20 rule is observed in all large systems, including data networks. Although the exact percentage may vary, the rule can be observed in many situations in the design and deployment of data networks, such as:

  • 80% of traffic in the network is generated by 20% of the applications.
  • 80% of the cost of the network comes from 20% of its components.
  • 80% of the design effort is dedicated to 20% of the design requirements.

Applying the 80/20 rule is useful in allocating resources during the design process to target the areas that yield substantial gains. For instance, designers should focus on analyzing the performance requirements of the most critical applications. Effort should also be spent on lowering the cost of the most expensive components in the design. When deploying a new network or upgrading an existing one, priority should be given to implementing the most used services.

When resources are limited, focusing on aspects of the network design beyond the most critical 20% yields diminishing returns and should be avoided.
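Identifying the critical 20% is a simple ranking exercise. The sketch below finds the smallest set of applications that accounts for a target share of total traffic; the application names and volumes are invented for illustration.

```python
# Sketch: find the applications that generate ~80% of total traffic.
# Traffic figures (arbitrary units) are illustrative assumptions.
def pareto_set(volumes, threshold=0.8):
    total = sum(volumes.values())
    chosen, running = [], 0.0
    for app, vol in sorted(volumes.items(), key=lambda kv: kv[1], reverse=True):
        chosen.append(app)
        running += vol
        if running / total >= threshold:
            break
    return chosen

traffic = {"video": 500, "backup": 300, "web": 120, "voip": 50, "dns": 30}
print(pareto_set(traffic))  # ['video', 'backup'] -> 800 of 1000 units
```

In this example two of five applications cross the 80% line, so performance analysis effort would concentrate on those two first.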

Cost-Benefit

A design can be evaluated using the cost-benefit principle: it is considered good if the benefits outweigh the costs. From the designer's perspective, a cost-benefit analysis can assess the financial returns associated with adding a design feature. The designer should also evaluate benefits and costs from the user's perspective so that design decisions are not made on financial cost alone. A successful design prioritizes features that have high benefits and low costs.

Cost-benefit analysis can be used to decide, for example, whether the cost of adding redundancy to the network outweighs the potential financial losses from an unexpected failure. From the user's perspective, adding more security constraints to the network should be weighed against ease of access. Offering high Internet bandwidth may enable remote work and collaboration among users and save money spent on travel; it can therefore be considered a low-cost, high-benefit feature.

It is important in cost-benefit evaluation to identify the costs and the benefits correctly. Using the latest networking technology in a new design may be considered important by the designer, but it adds little value to the users if there is no perceived improvement in performance. Similarly, adding unnecessary redundancy may increase the cost and complexity of the network without a significant reliability improvement.
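The redundancy example above can be reduced to a comparison between the annualized cost of the redundant component and the expected loss it avoids. All figures and the risk-reduction factor below are illustrative assumptions.

```python
# Sketch of a cost-benefit check for redundancy: is the expected annual
# loss avoided greater than the annual cost of the redundant component?
# All numbers are invented for illustration.
def redundancy_worthwhile(annual_cost, outage_prob, outage_loss, risk_reduction):
    expected_loss_avoided = outage_prob * outage_loss * risk_reduction
    return expected_loss_avoided > annual_cost

# A second WAN link costing 12,000/year; a 10% annual chance of a 200,000
# outage; the redundant link is assumed to avoid ~90% of that risk.
print(redundancy_worthwhile(12_000, 0.10, 200_000, 0.9))  # True (18,000 > 12,000)
```

The same comparison run with a 20,000/year link cost would reject the feature, illustrating how the decision flips with the inputs rather than with the principle itself.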

Factor of Safety

Creating exact performance specifications for a new network design is challenging. There can be many unknowns that force the designer to make assumptions and guesses, and the level of uncertainty grows as the number of design variables increases. Engineers use factors of safety to compensate for errors caused by these unknowns (some fields of engineering use well-known safety factors).

In network design, adding a safety factor could mean doubling the amount of the WAN bandwidth needed between sites to accommodate unexpected traffic demands, or increasing the link budget for an optical link to offset attenuation caused by unexpected/unknown splicing, or using a firewall with higher VPN throughput than initially estimated.

The use of a safety factor comes at additional cost; therefore, the more accurate the performance estimates are, the smaller the factor needs to be. The safety factor is also higher in the design of new networks, when little is known about how the network will perform in production. Upgrades, or designs of similar types of networks, need a lower safety factor because of the greater knowledge gathered about the network's performance.

A safety factor reduces the probability of failure by exceeding the design specifications at the expense of higher cost. The factor of safety (or uncertainty) should not be confused with scalability provisions. The former protects against design errors (underestimation), uncertainty, and variability, whereas the latter accommodates future growth.
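The relationship between estimate confidence and safety factor can be sketched as a simple sizing table. The specific multipliers below are illustrative assumptions, not industry standards; they only encode the rule that lower confidence warrants a larger factor.

```python
# Sketch: WAN bandwidth sizing with a confidence-dependent factor of safety.
# The multipliers are illustrative assumptions.
FACTORS = {
    "new design":       2.0,  # little known about production behavior
    "similar network":  1.5,  # experience with comparable deployments
    "measured upgrade": 1.2,  # sizing from measured traffic data
}

def provisioned_bandwidth(estimated_peak_mbps, confidence):
    return estimated_peak_mbps * FACTORS[confidence]

print(provisioned_bandwidth(400, "new design"))       # 800.0 Mbps
print(provisioned_bandwidth(400, "measured upgrade")) # 480.0 Mbps
```

The same 400 Mbps estimate is provisioned very differently depending on how much is known, which is exactly the cost-versus-uncertainty trade described above.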

Architectural Structure Principles

These define how the network is organized technically.

Patterns

Designers in many fields (architecture, software, and engineering, for example) rarely approach a design problem by re-inventing the wheel. Instead, designers typically apply design patterns. Design patterns are typical solutions to common problems in design. A pattern is a standard blueprint that can be used to solve a particular design problem. However, since design problems are rarely identical, the initial design pattern may be customized and modified to solve the particular problem at hand.

There are several patterns in network design. The most common is the hierarchical model, which divides a network into two or three discrete tiers. Each tier provides specific functions within the overall network, such as connectivity to end nodes or bulk data transport. The hierarchical model applies mostly to LANs, but it is often used to solve other design problems due to its popularization by the vendor that created it.

A spine-leaf pattern is used in data center designs. The pattern includes two (or more) tiers of network devices forming a spine and a leaf layer. The leaf devices connect to end nodes (servers) to aggregate and pass traffic to the spine devices. Although the spine-leaf topology may resemble the hierarchical model, there are many notable differences, including avoiding bandwidth oversubscription by ensuring equal bandwidth at all tiers and avoiding the Spanning Tree Protocol (STP) to facilitate multi-path forwarding of data traffic.

Other patterns may not be as common due to their specialized use. Service providers' networks are constructed as partial-mesh topologies connecting several Point-of-Presence (POP) locations (which could be data centers). A centralized tree topology is often used to deliver broadband network services to residential and business customers. The tree is constructed by connecting customers to an aggregation point (e.g. a curb node), which in turn is connected to another aggregation point until all traffic is aggregated at the service provider's head end.

Modularity

Modularity is a fundamental architectural design principle in which a network is divided into multiple functional components, with each component serving a specific role. In practice, modules may represent different functions—such as enterprise LAN, WAN, or data center—or replicate the same function across multiple locations, such as access networks in different buildings.

Modularity facilitates scalability by allowing individual functions to be expanded, redesigned, or replicated without affecting the entire system. It also improves fault isolation by limiting the propagation of failures or security breaches between modules. In large networks, modular design improves manageability by organizing the system along clear functional boundaries.

Modules are typically created by limiting the propagation of control and data-plane information between functional areas. These boundaries provide locations where information can be hidden through filtering or aggregation. Examples include stopping Layer-2 broadcasts at routing boundaries, route summarization and default routing, network address translation (NAT), and policy filtering. Information hiding reduces the amount of state that must be maintained across the network and improves scalability (White, 2014; White, 2018).

Traffic between modules must cross defined boundaries. These boundaries provide natural control points for implementing security policies, Quality of Service (QoS), monitoring, and other operational functions.

Modularity describes the structure of the network architecture. The related principle of divide-and-conquer, discussed later, describes how the design process itself can be decomposed.

Because modularity, like redundancy, increases complexity, it must be balanced carefully against the KISS principle discussed later in this document.

Segmentation

Segmentation is an architectural technique used to divide a network into smaller functional or logical domains in order to improve scalability, security, and fault isolation. By limiting how traffic and control information propagate across the network, segmentation reduces operational complexity and prevents localized failures from affecting the entire system.

Networks can be segmented along several dimensions, including function (access, distribution, core), geography (campus, branch, data center), security level (trusted vs. untrusted zones), or service type. Common implementation mechanisms include routing boundaries, VLANs, VRFs, and filtering policies.

Segmentation often serves as a mechanism for enforcing modular architectural boundaries, but it may also be applied within a module to create security or operational domains.

Scalability

Scalability is a system’s ability to adjust performance in response to changing demands, such as the number of users, end nodes, bandwidth, or service demand. Networks must scale easily to support business needs and adapt not only to growth but also to reductions in demand.

Scalability is a key design principle in networks, influenced by topology, protocols, and network components. For example, hub-and-spoke topologies are generally easier to scale than full-mesh, and some routing protocols handle thousands of routers better than others. Modularity is often used to support scalability in network design.

Scalability is accomplished through two approaches:

Scaling out/in (horizontally): Scaling horizontally means adjusting network resources by adding or removing components with similar functionality, such as switches, access points, or gateways.

Scaling up/down (vertically): Scaling vertically means upgrading or downgrading individual devices or links, such as increasing WAN bandwidth or enhancing a firewall for more VPN sessions.

Scaling out is often easier and faster, with less downtime, but it can add complexity to network management. This advantage may not hold for WAN links, where upgrading the bandwidth of an existing circuit is often simpler than adding and load-balancing parallel links.

Design effort does not scale linearly with network size. Factors like power, heat, and cabling may be minor in small networks but become significant in large environments such as airports.

Complexity and Resilience Trade-offs

These principles frequently conflict and should be presented together to emphasize trade-offs.

KISS

Keep it simple, stupid (KISS) is a design principle that requires a system's design to be as simple as possible. A complex design may be expensive to build and maintain, and unnecessary features or components increase the potential for failures.

From a management perspective, network designs should be simple enough for the management teams to manage and repair easily if a failure occurs. If the management team cannot understand how the network works, it is likely that it will not be managed properly.

Among the reasons for creating complex designs is scope creep: broadening the scope of the design project beyond the initial requirements because the customer demands more features or changes. A design may also become complex as a result of adding unnecessary redundancies or using complex protocols in simple topologies.

The phrase “keep it simple, stupid” is attributed to Kelly Johnson, who was the lead engineer at the Lockheed Skunk Works. However, the principle is stated in many other forms, such as Occam’s Razor principle, which simply states that given a choice between functionally equivalent designs, the simplest design should be selected.

Redundancy

Redundancy is essential for resilient communication networks: it eliminates single points of failure, since the probability of two components failing simultaneously is far lower than that of a single failure. According to Lidwell (2010), redundancy types include:

  • Diverse redundancy: Uses different component types (e.g., fibre and wireless links or separate routes for fibre) to prevent failures from a common cause but is complex to maintain.
  • Homogeneous redundancy: Involves multiple identical components (like multiple firewalls), making implementation simple but susceptible to simultaneous failure from a shared cause, such as a DoS attack.
  • Active redundancy: Redundant components run at all times.
    • Active/Active: All devices are active and may share loads; if one fails, others continue without disruption, provided capacity suffices.
    • Active/Standby (Hot Standby): Backup devices monitor primaries and take over automatically on failure, requiring minimal human intervention and short MTTR.
  • Passive redundancy: Backups activate only after a failure (cold standby), needing human action and longer MTTR.

These redundancy forms can be combined based on system importance and expected failures. Both redundancy and safety factors are crucial engineering methods to ensure integrity.
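The availability gain from redundancy can be quantified under the assumption of independent failures (the very assumption that homogeneous redundancy can violate and diverse redundancy protects). A minimal sketch:

```python
# Sketch: availability of n redundant components in parallel, any one of
# which keeps the service up. Assumes independent failures, which is a
# key assumption that a shared-cause failure (e.g., a DoS attack against
# identical firewalls) would break.
def parallel_availability(single, n):
    return 1 - (1 - single) ** n

print(parallel_availability(0.99, 1))           # one "two nines" component
print(round(parallel_availability(0.99, 2), 6)) # 0.9999 with two in parallel
```

Two 99%-available components yield roughly 99.99% availability only if their failures are uncorrelated, which is why the choice between diverse and homogeneous redundancy matters as much as the component count.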

Flexibility-Usability Trade-off

Flexible systems have the ability to perform more functions, but they perform these functions less efficiently than specialized systems. As the flexibility of a designed system increases, its usability decreases. Flexibility also comes at the expense of more complexity in the design and higher cost; therefore, a trade-off between flexibility and usability is needed.

Flexibility in network design refers to the ability to adapt the network resources to accommodate a large set of requirements, or adapt to change in requirements. Software Defined Networking, Network Virtualization and Network Function Virtualization technologies have been created to provide more flexibility in network infrastructure, especially in the design of Cloud infrastructures where application behavior and traffic flows are not known in advance and are hard to predict.

Alternatively, network designs that are based on an understanding of user and application behavior can deliver optimal performance using a simple topology and device configuration. This approach suits networks that have a limited set of applications and well-understood traffic patterns that remain stable for a long time. The design of these networks can follow the structured design process described later in this document.

Flexibility is needed for the former (dynamic) type of networks, but it requires layers of abstraction and virtualization and a higher level of skill to build and operate, which translates into higher costs. The latter (static) type of networks cannot adapt to changes that deviate too far from the initial design specifications, but they can be built and operated with much less effort and expense.

Design Process Principles

These describe how the design is produced, not the structure of the network itself.

Design Process

A common approach to network design is to use a top-down methodology, which starts by understanding the services required by users and applications before determining the topology and selecting components and protocols. The process consists of several key phases:

Gathering user requirements: Requirements are collected from users and stakeholders through meetings, surveys, and communications like requests for proposals. Upgrades may use current specifications as a basis. Requirements are classified by user, application, host, and network. Designers should also consider business and external factors, such as decision processes, vendor preferences, funding, and timelines. Experience, market conditions, technology trends, and governmental influences shape the design.

Establishing performance specifications: Information about applications, user locations, usage patterns, and host capabilities helps estimate data flows. Reliability, latency, and security needs are based on application importance. Infrastructure specifications like power, cabling, and space are also considered. This phase results in performance criteria and a map of major network data flows.

Problem Solving: This stage identifies major challenges and solves them to meet specifications efficiently. Topology designs are based on expected data flows. When data flows cannot be predicted, networks may use established patterns. Heuristics assist in designing backbone and access network topologies. Multiple solutions are compared to find the best fit. Manageability and security strategies are integrated, and high-level decisions are made, including technology, protocols, zoning, and routing. Large projects can systematically evaluate alternatives with tools such as a Decision Matrix, assigning weights to design criteria based on priority.

Finalizing Details: The final stage settles details such as IP addressing, VLAN assignments, optical and wireless channel allocations, equipment selection, and the bill of materials. Omitting such details can make the design ineffective. Simulations, prototypes, and proof-of-concept (PoC) deployments validate complex designs and prepare for larger implementations.

Design principles guide decisions throughout the design process. After requirements are identified, architectural principles such as modularity and segmentation help structure candidate designs, while constraint principles such as cost-benefit and factor of safety help evaluate alternatives. When conflicts arise, the hierarchy of needs and the trade-off matrix provide a framework for prioritization. Operational principles such as consistency, observability, and automation ensure that the final design remains manageable throughout its lifecycle.

Divide-and-Conquer

The divide-and-conquer principle applies to the design workflow rather than the network architecture. Complex network designs are difficult to develop as a single task; therefore, the design process should be decomposed into smaller and more manageable components.

This decomposition may follow functional areas (LAN, WAN, data center), technical domains (routing, security, wireless), or design stages (requirements, architecture, and implementation). Breaking the design effort into smaller tasks reduces cognitive complexity and allows multiple designers or teams to work concurrently.

Unlike modularity or segmentation, which describe structural properties of the network, divide-and-conquer describes how designers organize the engineering process.

Collaborative Design

Significant engineering designs are rarely the result of a single person’s effort. Because today’s networks are complex and involve a wide range of technologies, it is nearly impossible for one individual to be an expert in every necessary domain. Therefore, the design process should involve a multidisciplinary team of experts and relevant stakeholders.

This collaborative approach offers several key advantages:

  • Success in Complexity: Committees are more effective when requirements are complex, technologies are diverse, and decisions are irreversible.
  • Concurrent Progress: Complex networks can be divided into components, allowing different teams to work on design tasks simultaneously rather than sequentially.
  • Stakeholder Buy-In: Regular engagement with clients and users ensures their requirements are met and helps secure support throughout the design lifecycle.

For designers working alone, it remains critical to solicit feedback from colleagues and maintain consistent communication with the client to validate the design’s progress.

Operational and Lifecycle Design Principles

These reflect how networks are operated after deployment.

Autonomy (Automation and Self-Configuration)

Modern networks increasingly rely on automation to improve consistency, scalability, and operational efficiency. Automation refers to the use of software tools and programmable interfaces to deploy, configure, and manage network components with minimal manual intervention.

Self-configuration extends this concept by enabling systems to adapt automatically to changes in topology, demand, or policy. Examples include automated provisioning, dynamic configuration templates, and controller-based architectures.

While automation reduces operational effort and human error, it also introduces abstraction layers that increase system complexity. For this reason, automation should be applied after foundational requirements, such as connectivity, reliability, and interoperability, are satisfied.

Automation also reinforces the consistency principle by enabling standardized configuration and policy enforcement across modules.

Both automation and self-configuration are tools for achieving autonomy, in which a network uses self-configuration, self-optimization, and self-healing to maintain a desired state.
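The "maintain a desired state" idea can be sketched as a reconciliation loop: compare intended state against observed state and emit corrective actions. The device names and the flat state model below are illustrative assumptions, not a real controller API.

```python
# Sketch of desired-state reconciliation, the core loop behind many
# self-configuring systems. State model and names are invented.
def reconcile(desired, observed):
    actions = []
    for device, config in desired.items():
        if observed.get(device) != config:
            actions.append(f"push config to {device}")
    for device in observed:
        if device not in desired:
            actions.append(f"remove {device} from inventory")
    return actions

desired  = {"sw1": {"vlan": 100}, "sw2": {"vlan": 100}}
observed = {"sw1": {"vlan": 100}, "sw2": {"vlan": 200}}
print(reconcile(desired, observed))  # ['push config to sw2']
```

Running such a loop continuously is what turns one-shot automation into autonomy: drift (sw2's wrong VLAN here) is detected and corrected without a human in the loop.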

Consistency

The consistency principle states that systems are more usable and learnable when similar parts are expressed in similar ways (Lidwell, 2010). Consistency in network design can be applied to appearance (e.g. network documentation) or to functionality (e.g. device configuration). Internal consistency refers to consistency with other components in the system (e.g. DHCP allocates the same address range in all subnets). External consistency refers to consistency with outside elements (e.g. VLAN100 has the same functions in all branches of the organization).

The consistency principle can be applied in network design in many ways:

  • Use templates in device configuration to reduce human errors and reduce the effort of management and troubleshooting.
  • Use consistent firewall policies in all firewalls.
  • Apply consistency in designing IP addressing scheme, VLAN to IP addresses mapping, and component labeling conventions.
  • Apply consistency in network documentation (symbols, icons, drawing sizes, etc.)

Consistency should be used without compromising functionality. Existing design standards and best practices should be followed whenever possible.
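The template approach from the first bullet can be illustrated with Python's standard-library `string.Template`. The interface configuration text and the variable names are invented for illustration; the point is that every access port is generated from one blueprint rather than typed by hand.

```python
# Sketch: one configuration template applied consistently across ports.
# The template text and variables are illustrative assumptions.
from string import Template

ACCESS_PORT = Template(
    "interface $port\n"
    " switchport access vlan $vlan\n"
    " description $desc\n"
)

config = ACCESS_PORT.substitute(port="Gi1/0/1", vlan=100, desc="Floor-2 desk")
print(config)
```

Because every port configuration comes from the same template, a policy change is made once in the blueprint instead of per device, which is what reduces human error and troubleshooting effort.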

Observability

As networks grow in complexity, the traditional goal of manageability must evolve into observability. While manageability often focuses on the basic status of components—such as whether a device is functioning or has failed—observability is the ability to understand the internal state of the entire network by analyzing the data it generates.

This principle extends the phase of establishing performance specifications by ensuring the network is transparent from the outset:

  • Telemetry and Data Flows: Observability relies on granular telemetry rather than simple “up/down” checks. It maps all major data flows to provide a clear picture of network performance in real-time.
  • Operational Clarity: If a management team cannot understand how the network works or why it is behaving a certain way, it cannot be managed properly. High observability provides the visibility required to simplify troubleshooting in complex environments.
  • Proactive Problem Solving: By monitoring usage patterns, designers can identify and solve challenges in an optimal manner before they impact the user experience.

Sustainability and Efficiency

In large-scale designs, factors such as power, heat, and cabling consume a significant portion of the design effort. Modern network design increasingly treats these considerations as part of a broader sustainability and efficiency objective.

  • Resource Optimization: Applying the 80/20 rule, designers should focus on reducing the energy consumption of the 20% of components that typically generate 80% of the cost and heat.
  • Scalable Efficiency: Networks should not only scale up to meet demand but also scale down gracefully. This includes using technologies that reduce power consumption during low-traffic periods to meet performance specifications more efficiently.
  • Life-cycle Planning: A successful design evaluates the cost-benefit of hardware, considering longevity, efficiency, and long-term operational impact.

This perspective aligns with the 80/20 rule, which encourages focusing optimization efforts on the components that contribute most significantly to cost and resource consumption.

Design for Security (Zero Trust Architecture)

While security has always been a significant need for the network, the traditional “perimeter” model is often insufficient for modern, diverse data networks. Zero Trust is a design principle that assumes no part of the network is inherently “safe,” requiring continuous verification.

  • Micro-segmentation: This applies the modularity principle at a granular level. By creating small, isolated zones, a designer can isolate security breaches so they do not propagate to other parts of the network.
  • Policy Enforcement: Zero Trust leverages the choke points between modules to implement strict security policies, filtering, and identity-based access.
  • Application-Centric Security: Following the top-down methodology, security is derived from understanding the nature of applications and their importance to the user.

While specific security policies are informed by application needs (Level 4 in the Hierarchy of Needs), zero trust principles must be implemented early on (at Level 2) to ensure the network is protected regardless of the traffic type.

Case Studies

The following scenarios illustrate how conflicting principles are weighed and resolved in practice.

Scenario | Context & Challenge | Design Decision | Engineering Trade-off | Resolution Driver
Data Center Segmentation | A data center hosts applications with varying security requirements; a flat Layer-2 design is simple but increases the risk of lateral movement. | Introduce segmentation utilizing VRFs and firewall policies. | Increased configuration complexity is accepted to improve security isolation. | Security Requirements
Modularity vs. Performance | A campus network is divided into multiple routing layers for scalability, but testing reveals slight latency increases at forwarding boundaries. | Maintain the modular structure while focusing on optimizing routing paths. | Minor performance overhead is accepted to preserve long-term scalability and manageability. | Long-term Scalability Requirements
Branch Office Redundancy | A small branch needs cloud connectivity on a limited budget; options include dual routers or a single router with dual uplinks. | Implement a single router configured with dual uplinks. | Redundancy is partially reduced to maintain simplicity and cost efficiency. | Cost-Benefit Constraints
Automation vs. Interoperability | An automation framework relies on vendor-specific APIs, but the organization operates a multi-vendor environment. | Utilize standards-based interfaces where possible and limit vendor-specific automation to isolated domains. | Reduced automation efficiency is accepted to improve multi-vendor interoperability. | Multi-vendor Constraints
Flexibility vs. Consistency | Multiple engineering teams request customized configurations for similar network segments to support specialized applications. | Enforce standardized configuration templates with strictly controlled exceptions. | Reduced flexibility is accepted to improve operational consistency. | Operational Maintainability

Conclusion

Network design principles provide a durable framework for engineering judgment. Rooted in decades of experience, they guide designers in structuring architectures, evaluating constraints, and planning for scalability, resilience, and operational sustainability. Their value lies not in rigid application, but in offering a shared conceptual foundation for reasoning about complex systems.

While principles may conflict in practice, such tensions are inherent to engineering under constraints. The hierarchy of needs and trade-off matrix presented in this document are intended to support prioritization when competing objectives arise. They complement, rather than replace, professional experience and contextual understanding.

By organizing principles into decision, architectural, process, and lifecycle categories, this review emphasizes that effective network design integrates structure, evaluation, and long-term operation. Understanding both the individual principles and their interactions enables designers to make consistent, defensible decisions across diverse technical and organizational environments.

References

Lidwell, William, Kritina Holden, and Jill Butler. 2010. Universal Principles of Design: 125 Ways to Enhance Usability, Influence Perception, Increase Appeal, Make Better Design Decisions, and Teach Through Design. Rockport Publishers, Inc.

McCabe, James D. 2007. Network Analysis, Architecture and Design. 3rd ed. Morgan Kaufmann.

White, Russ, and Ethan Banks. 2018. Computer Networking Problems and Solutions: An Innovative Approach to Building Resilient, Modern Networks. 1st ed. Addison-Wesley Professional.

White, Russ, and Denise Donohue. 2014. The Art of Network Architecture: Business-Driven Design. 1st ed. Cisco Press.