
Capacity Requirement

Quality Software Development

William G. Bail, in Advances in Computers, 2006

4.1.4 Capacity Requirements

Capacity requirements deal with the amount of information or services that can be handled by the component or system. These are important since they establish the way that the system can be used. If the capacity needs are not clearly defined, developers might underestimate what is needed and the users will find the system unusable. On the other hand, developers might provide too many resources, making the system expensive and resource-intensive. Examples include:

"The system shall be able to support 25 simultaneous users."

"The system shall be able to manage up to 20,000 employee records."

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S0065245805660032

Functional Analysis and Allocation Practice

Richard F. Schmidt, in Software Engineering, 2013

11.2.9 Identify data retention capacity requirements

The data storage capacity requirements for long-term data retention records must be specified. The operational or business model should be evaluated to determine the maximum volume of data records that would need to be supported for a given time period, and operational projections should be used to determine the periodic demand for data storage capacity. Factors that must be considered when performing capacity planning are the location of data storage facilities, data record retention duration, recovery of deleted data record storage space, and the periodic demand for new data record creation. The data retention capacity requirements will affect how the software interacts with a database management system, as well as directly impact the configuration of the computing environment.
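
To make these factors concrete, the sizing arithmetic can be sketched in a few lines of code. The following Python snippet is only an illustration; the function name, record volume, record size, retention period, and growth rate are hypothetical assumptions, not values from the chapter.

```python
# Rough data-retention capacity estimate (illustrative, assumed values only).

def retention_capacity_gb(records_per_day: float,
                          avg_record_size_kb: float,
                          retention_days: int,
                          annual_growth_rate: float = 0.10,
                          years: int = 3) -> float:
    """Estimate peak storage needed for retained records.

    Assumes space for deleted records is reclaimed only after the
    retention period expires, so the steady-state record population is
    the (projected) daily creation rate times the retention duration.
    """
    peak_daily_rate = records_per_day * (1 + annual_growth_rate) ** years
    retained_records = peak_daily_rate * retention_days
    return retained_records * avg_record_size_kb / (1024 * 1024)  # KB -> GB

# Example: 50,000 new records per day, 2 KB each, retained for 7 years.
print(f"{retention_capacity_gb(50_000, 2.0, 7 * 365):.0f} GB")
```

An estimate of this kind would then be reconciled with the database management system's storage model and the configuration of the computing environment, as noted above.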

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124077683000112

An Introduction to Systems Auditing

Craig Wright, in The IT Regulatory and Standards Compliance Handbook, 2008

System Management Controls

Capacity Planning

Projections for future computing and communications capacity requirements must be made to ensure that adequate capacity is available. The likely lead times for equipment upgrades or replacement must be taken into account.

Makeshift solutions to capacity problems could introduce security flaws, because compatibility limitations or hurried implementations may leave gaps in the security measures.

System Acceptance

The following items need to be considered when performing acceptance testing:

Performance and capacity requirements for computing platforms and communications systems

Preparation of error recovery and restart procedures

Preparation and testing of routine operating procedures to defined standards

Evidence that new or modified systems will not adversely affect existing systems

Testing to prove that the new or modified system operates according to the specifications and business unit sign-off prior to production implementation

Correct functioning of security and application control processes

Training in operating and using the new or modified system

Configuration Management

Configuration changes are those changes to the baseline hardware, operating system and application software in operation within the host system(s):

All proposed configuration changes must maintain or enhance the level of system security and shall not, in any way, degrade existing levels of system security safeguards.

All configuration changes to the host server must be recorded using a change control mechanism.

IT Change Control

Formal responsibilities and accountabilities must be established for IT change control, particularly in relation to:

Identification and recording of all changes to facilities and systems supporting production

Assessment of the potential impacts of such changes such as performance and security impacts and compliance capability

Completion of an approval process for changes

Communication of changes to all relevant business and IT personnel

Implementation of responsibilities, accountabilities and processes for backing out of unsuccessful changes

Implementation of security changes or upgrades

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978159749266900014X

Production scheduling, management and control

In Practical E-Manufacturing and Supply Chain Management, 2004

9.2.1 Enterprise scheduling

Most ERP/MRP systems have a capacity requirements planning (CRP) module that indicates capacity overloads. CRP does not take into account the sequencing of work and provides no facility for accommodating late or new orders. This is why many MRP suppliers have added some form of graphical, interactive tool to replace CRP, either by offering third-party tools as an integrated part of their own package or by developing their own. Finite capacity scheduling (FCS) produces better sequencing and achievable schedules, which allows companies to become more agile and responsive while still maintaining customer service levels.

MRP systems take customer orders and break them down into individual parts using a bill of materials (BOM), then aggregate the requirements for the parts into works and purchase orders; however, the relationship between a works or purchase order for a part and the customer orders is lost during this process. To provide APS functionality, the scheduling system should understand these relationships in order to know how to sequence the works/purchase orders that have to be made. APS systems often duplicate MRP functionality, re-exploding the BOM to understand the links between the works/purchase orders.
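
As a rough illustration of the BOM explosion and aggregation step described above, the Python sketch below breaks hypothetical customer orders into part requirements and totals them. The bill of materials, part names, and quantities are invented for the example and are not taken from any particular MRP package; note how the aggregated totals no longer record which customer order they came from, which is exactly the link that APS/DMC re-establishes.

```python
from collections import defaultdict

# Hypothetical single-level bill of materials: finished item -> {part: qty per unit}.
bom = {
    "bicycle": {"frame": 1, "wheel": 2, "chain": 1},
    "tricycle": {"frame": 1, "wheel": 3, "chain": 1},
}

customer_orders = [("bicycle", 100), ("tricycle", 40)]

# Explode each order into part requirements and aggregate across orders.
part_requirements = defaultdict(int)
for item, qty in customer_orders:
    for part, per_unit in bom[item].items():
        part_requirements[part] += qty * per_unit

print(dict(part_requirements))  # {'frame': 140, 'wheel': 320, 'chain': 140}
```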

APS also duplicates ERP functionality, including forecasting systems, distribution software, and so on. The latest material control modules of APS include the concept of dynamic material control (DMC), whereby the linking between works/purchase orders is carried out during the scheduling run. During rescheduling the planner can decide whether to keep the existing links or to re-allocate the materials because a problem has occurred somewhere in the production facility. As a by-product, this gives traceability of which materials and components go into which customer orders.

To extend the APS and DMC principles to the whole of the supply chain, the schedules of the manufacturer and those of its suppliers and subcontractors need to be synchronized. Using a single high-level SCM model for the entire supply chain is not accurate enough to take into account the current and future workloads of every member, since much of the work of suppliers and subcontractors is not related to the manufacturer. The solution to this problem is to make the APS system of each member of the supply chain available to all the other members.

Supply-chain scheduling (SCS) provides this functionality and it also provides capable-to-promise (CTP) or make-to-order functionality, which is suitable for subcontractors, while the DMC module provides true available-to-promise (ATP) functions, taking existing stocks into account.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780750662727500129

Flow Analysis

James D. McCabe, in Network Analysis, Architecture, and Design (3), 2007

4.8.1 Flowspec Algorithm

Flowspecs are used to combine performance requirements of multiple applications for a composite flow or multiple flows in a section of a path. The flowspec algorithm is a mechanism to combine these performance requirements (capacity, delay, and RMA) for flows in such a way as to describe the optimal composite performance for that flow or group of flows.

The flowspec algorithm applies the following rules:

1.

Best-effort flows consist only of capacity requirements; therefore, only capacities are used in best-effort calculations.

2.

For flows with predictable requirements we use all available performance requirements (capacity, delay, and RMA) in the calculations. Performance requirements are combined for each characteristic so as to maximize the overall performance of each flow.

3.

For flows with guaranteed requirements we list each individual requirement (as an individual flow), not combining them with other requirements.

The first condition is based on the nature of best-effort traffic—that it is unpredictable and unreliable. RMA and delay requirements cannot be supported in a best-effort environment. The best that can be expected is that capacity requirements may be supported through capacity planning (also known as traffic engineering) or by over-engineering the capacity of the network to support these requirements.

The second condition is at the heart of the flowspec—that for each performance characteristic, capacity, delay, and RMA, the requirements are combined to maximize the performance of the flow or group of flows. How the requirements are combined to maximize performance is discussed later in this chapter.

The third condition is based on the nature of guaranteed requirements. Since flows with such requirements must be supported end-to-end, their requirements are kept separate so that we can identify them in the network architecture and design.

When a one-part flowspec is developed (for flows with best-effort requirements), the capacities of the flows are combined. There should be no RMA or delay requirements for these flows. Capacities are added together, forming a total best-effort capacity (C_BE), as shown in Figure 4.37.

Figure 4.37. A One-Part Flow Specification

A two-part flowspec builds on a one-part flowspec, adding predictable capacities, delay, and RMA. When a two-part flowspec is developed (for flows with best-effort and predictable requirements), the best-effort flows are combined in the same way as for the one-part flowspec. For the predictable requirements, the capacities are added together as with the best-effort flows, so that the flowspec has a total capacity for best-effort flows (C_BE) and another capacity for predictable flows (C_P). For the delay and RMA requirements for predictable flows, the goal is to maximize each requirement. For delay, the minimum delay (i.e., the highest-performance delay) of all of the delay requirements is taken as the predictable delay (D_P) for the flowspec, and the maximum RMA (i.e., the highest-performance RMA) of all of the RMA requirements is taken as the predictable RMA (R_P) for the flowspec. Figure 4.38 illustrates this for a two-part flowspec.

Figure 4.38. A Two-Part Flow Specification

A multi-part flowspec is the most complex of the flowspecs, building on a two-part flowspec to add guaranteed requirements. Best-effort capacity, along with predictable capacity, delay, and RMA, is generated in the same fashion as for a two-part flowspec, and each set (i) of guaranteed performance requirements is added individually (shown as C_i, R_i, D_i) to the flowspec, as shown in Figure 4.39.

Figure 4.39. A Multi-Part Flow Specification

Sets of guaranteed performance requirements (C_i, R_i, D_i) are listed individually in the flowspec to show that they will be supported as individual requirements and not grouped with other requirements. This is necessary for us to be able to fully support each guaranteed requirement throughout the network.
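
The three rules can be expressed compactly in code. The sketch below is a hypothetical Python rendering of the flowspec algorithm as described in this section (capacities summed, the minimum delay and maximum RMA taken across predictable flows, and guaranteed flows kept as individual entries); the dictionary layout and tuple ordering are assumptions made for the example rather than McCabe's notation.

```python
def build_flowspec(best_effort, predictable, guaranteed):
    """Combine flow requirements into a multi-part flowspec.

    best_effort: list of capacities (e.g., Mb/s)
    predictable: list of (capacity, delay, rma) tuples
    guaranteed:  list of (capacity, delay, rma) tuples, kept individual
    """
    spec = {"C_BE": sum(best_effort)}                    # rule 1: sum capacities
    if predictable:
        spec["C_P"] = sum(c for c, _, _ in predictable)  # rule 2: sum capacities,
        spec["D_P"] = min(d for _, d, _ in predictable)  # take the minimum (best) delay,
        spec["R_P"] = max(r for _, _, r in predictable)  # take the maximum (best) RMA
    spec["guaranteed"] = list(guaranteed)                # rule 3: listed individually
    return spec

# Illustrative numbers: two best-effort flows, two predictable flows,
# and one guaranteed flow kept as its own (capacity, delay, RMA) entry.
print(build_flowspec(
    best_effort=[1.5, 0.5],
    predictable=[(4.0, 0.040, 0.999), (2.0, 0.100, 0.995)],
    guaranteed=[(6.0, 0.010, 0.9999)],
))
```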

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123704801500050

Scaling Packet Ethernet Services Using Seamless MPLS

Vinod Joseph, Srinivas Mulugu, in Network Convergence, 2014

Next Generation Mobile Transport Characteristics

The next generation mobile backhaul infrastructure has the following characteristics, which are discussed in the sections that follow:

High-capacity requirements from edge to core

Exponential increase in scale driven by LTE deployments

Support for multiple and mixed topologies

Seamless interworking with the mobile packet core

Transport of multiple services from all locations

High-Capacity Requirements from Edge to Core

The mobile landscape is changing with consumer behavior. Powerful new mobile devices, increasing use of mobile Internet access, and a growing range of data-hungry applications for music, video, gaming, and social networking are driving huge increases in data traffic. A recent study forecast that mobile data traffic is set to increase 18-fold globally between 2011 and 2016, as pictured in Figure 4.2. These exploding bandwidth requirements are driving high-capacity requirements from the edge to the core, with typical rates of 100 Mbps per eNodeB, 1 Gbps in the access, 10 Gbps in the aggregation, and 100 Gbps in future core networks.

Figure 4.2.
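
A back-of-the-envelope dimensioning exercise based on these rates might look like the following Python sketch. Only the 100 Mbps per-eNodeB figure comes from the text above; the site counts and the oversubscription ratio are hypothetical assumptions chosen for illustration.

```python
# Illustrative backhaul dimensioning from the edge toward the core.
ENODEB_RATE_MBPS = 100          # typical per-eNodeB rate quoted above
sites_per_access_ring = 8       # assumed
access_rings_per_agg_node = 10  # assumed
oversubscription = 4            # assumed statistical-multiplexing ratio

access_ring_mbps = sites_per_access_ring * ENODEB_RATE_MBPS / oversubscription
agg_mbps = access_rings_per_agg_node * access_ring_mbps

print(f"Access ring:  {access_ring_mbps:.0f} Mbps (fits a 1 Gbps access link)")
print(f"Aggregation: {agg_mbps:.0f} Mbps (sized toward a 10 Gbps aggregation link)")
```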

Exponential Increase in Scale Driven by LTE Deployments

LTE will drive ubiquitous mobile broadband with its quantum leap in uplink and downlink transmission speeds.

In densely populated areas, the increased data rates delivered to each subscriber mean that a cell's capacity must be divided among fewer users. Because of this, cells must be much smaller than they are today.

Another factor to consider is macro cell capacity. The spectrum allotted to mobile networks has been increasing over the years, roughly doubling over a five-year period. With advancements in radio technology, a corresponding increase in average macro cell efficiency has occurred over the same period. As a result, macro cell capacity, which is the product of these two quantities, will see a four-fold increase over a five-year period. This increase, however, is nowhere close to the projected 26-fold increase in mobile data traffic and will force mobile operators to deploy a small-cell network architecture.

These two factors will force operators to adopt small-cell architectures, resulting in an exponential increase in the number of cell sites deployed in the network. In large networks covering large geographies, the scale is expected to be on the order of several tens of thousands to a few hundred thousand LTE eNodeBs and associated CSGs.

Support for Multiple and Mixed Topologies

Many options exist for physical topologies in the RAN transport network, with hub-and-spoke and ring being the most prevalent. Capacity requirements driven by subscriber density, the CAPEX of deploying fiber across large geographies, and physical link redundancy considerations could lead to a combination of fiber and microwave rings in the access network, fiber rings and hub-and-spoke in the aggregation and core networks, and so on. The transport technology that implements the RAN backhaul must be independent of the physical topology, or combination of topologies, used in the various layers of the network, and must cost-effectively scale to accommodate the explosive increase in bandwidth requirements imposed by mobile growth.

Seamless Interworking with the Mobile Packet Core

As mentioned earlier, the flattened all-IP LTE/EPC architecture is a significant departure from previous generations of mobile standards and should be an important consideration in designing the RAN backhaul for 4G mobile transport.

The 2G/3G hierarchical architecture consists of a logical hub-and-spoke connectivity between BSC/RNC and the BTS/NodeBs, as shown in Figure 4.3. This hierarchical architecture lent itself naturally to the circuit-switched paradigm of having point-to-point connectivity between the cell sites and controllers. However, the reach of the RAN backhaul was limited, in that it extended from the radio access network to the local aggregation/distribution location where the controllers are situated.

Figure 4.3.

In contrast, the flat LTE architecture does away with the hierarchy by getting rid of the intermediate controller, like the BSC/RNC, and lets the eNodeB communicate directly with the EPC gateways, as shown in Figure 4.3. It also does away with the point-to-point relationship of 2G/ 3G architectures and imposes multipoint connectivity requirements at the cell site. This multipoint transport requirement from the cell site not only applies to the LTE X2 interface, which introduces direct communication between eNodeBs requiring any-to-any mesh network connectivity, but also applies to the LTE S1 interface, which requires a one-to-many relationship between the eNodeB and multiple EPC gateways. While the serving gateways (SGWs) may be deployed in a distributed manner closer to the aggregation network, the MMEs are usually fewer in number and centrally located in the core. This extends the reach of the RAN backhaul from the cell site deep into the core network. Important consideration also needs to be given to SAE concepts like MME pooling and SGW pooling in the EPC that allow for geographic redundancy and load sharing. The RAN backhaul service model must provide for eNodeB association to multiple gateways in the pool and migration of eNodeB across pools without having to re-architect the underlying transport architecture.

Transport of Multiple Services from All Locations

LTE has to co-exist with other services on a common network infrastructure that could include:

Existing mobile services:

3G UMTS IP/ATM

2G GSM and SP WiFi in a mobile-only deployment

A myriad of other services:

Residential broadband triple play

Metro Ethernet forum (MEF) E-Line and E-LAN

L3VPN business services

RAN sharing, wireline wholesale in a converged mobile and wireline deployment

In these scenarios, the network has to not only support multiple services concurrently, but also support all these services across disparate endpoints. Typical examples are:

L3 transport for LTE and Internet high speed packet access (I-HSPA) controller-free architectures: from RAN to SAE gateways in the core network

L3 transport for 3G UMTS/IP: from RAN to BSC in the aggregation network

L2 transport for 2G GSM and 3G UMTS/ATM: from RAN to RNC/BSC in the aggregation network

L2 transport for residential wireline: from access to broadband network gateways (BNG) in the aggregation network

L3/L2 transport for business wireline: from access to remote access networks across the core network.

L2 transport for wireline wholesale: from access to retail wireline SP peering point

L3 transport for RAN sharing: from RAN to retail mobile SP peering point

The transport technology used in the RAN backhaul and the network architecture must be carefully engineered to be scalable and flexible enough to meet the requirements of various services being transported across a multitude of locations in the network.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123978776000047

System hierarchies and components

In Practical E-Manufacturing and Supply Chain Management, 2004

3.8.2 ERP functionality

Core Subsystems of an ERP System | Key Functions of ERP Systems
Sales and marketing | Finance/Accounting
Master scheduling | Sales and distribution
Materials requirement planning | Budgeting and planning
Capacity requirement planning | Human resource and personnel
Bill of materials | Fixed assets
Purchasing | Material management and inventory control
Shop floor control | Master scheduling
Purchasing | Work order management
Shop floor control | Logistics and warehouse management
Accounts payable/receivable | Purchasing/sourcing
| Logistics

ERP systems provide the following to an enterprise.

Supply-chain visibility

The entire supply chain revolves around manufacturing; so optimizing the supply chain requires good information about it. Of the various applications of supply-chain solutions (SCS), available-to-promise, manufacturing planning and production scheduling are most closely tied to both plant operations and to ERP.

Plant decision support

The most frequently cited benefit area is improved decision support within the plant. Most of the information used in plant decision-making comes from the plant itself, not from ERP. However, most plant-level systems deal more with detailed data (temperatures, pressures, flow rates) instead of higher-level business information (pricing, shipment schedules, production orders) that is valuable in decision support.

Integrating plant data into ERP first requires some type of transformation of that data to be more meaningful – production orders into set points, flow rates into production totals, etc. The ERP integration is often the catalyst for automating that transformation.

Better data

Another important benefit area is improved cost and financial accuracy. Improved data accuracy may really be the result of improved consistency. Prior to integrating with ERP (see Figure 3.16), it is quite common for companies to implement multiple, disparate systems for capturing similar information for different uses.

Figure 3.16. ERP/plant messages

Technological

ERP provides data of higher integrity, as disparate systems often lead to poor-quality information. Such systems are often not integrated, which makes data acquisition and capture difficult. Many of these systems may also be obsolete and unable to support organizational growth.

Operational

The integrated nature of ERP also helps organizations address poor performance and very high cost structures. As information is more readily available throughout the organization, ERP helps increase responsiveness to customers. It also supports the organizational strategy, especially in the light of globalization, and assists in standardizing complex and inconsistent business processes throughout the organization.

Benefits of integration

ERP systems are used to perform business functions that focus on the entire business's transactions. Benefits of an ERP system include:

Access to an expertise database

Automatic introduction of latest technologies

Better customer service, customer satisfaction

Better project management

Integration of the system across all departments in a company as well as across the enterprise that enables the enterprise to operate as a unit

Performance of core corporate activities across functional areas

Automation of business processes, which in turn improves the overall business

Business development – new areas, products, services

Ability to face competition

IT, development and employment of new technology

Replacement of other software that does not meet business needs

Replacement of legacy systems that are difficult to maintain (e.g., for euro currency support)

Replacement of obsolete hardware/software that is difficult to maintain

An ERP system is an SCM enabler

Reduction of inventory, personnel, IT cost, procurement cost, etc.

Improvements of productivity, order-management, cash management, financial close cycle and revenue

ERP facilitates company-wide integrated information systems covering all functional areas, performs core corporate activities, and improves customer service.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780750662727500063

Secure Working Practices

David Watson, Andrew Jones, in Digital Forensics Processing and Procedures, 2013

12.8.1.1 Guidelines for System Acceptance

Acceptance criteria for new systems are:

all security assessments must have been performed, and security controls developed, tested, documented, and signed off by the Information Security Manager;

all performance and capacity requirements must be fulfilled;

all development problems must be successfully resolved;

testing proves there will be no adverse effect on existing live systems;

all specifications have been met;

the system can be supported by the Forensic Laboratory IT on a continuing basis (for example, via the Service Desk);

roll-back arrangements are in place in the event of the changes failing to function as intended (all roll-backs must be performed in accordance with the Forensic Laboratory change management procedures);

sign-off has been obtained from the key stakeholders (for example, the business unit, System Administrator(s), Application Owner, etc.);

error recovery and restart procedures are established, and contingency plans have been developed or updated;

system operating procedures have been tested;

users are educated in the use of the system, and the IT Department are trained to run the system correctly.

In addition, the following checks should be observed when accepting a new system:

old software, procedures, and documentation must be discontinued;

acceptance checks, release and configuration management processes, as defined in Chapter 7, Sections 7.4.4 and 7.4.5, respectively, must ensure that only tested and approved versions of software are accepted into the live environment;

responsibility must be transferred to system operators after installation is complete.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781597497428000121

Method for the Enhancement of Buildability and Bending Resistance of Three-Dimensional-Printable Tailing Mortar

Zhijian Li, ... Guowei Ma, in 3D Concrete Printing Technology, 2019

8.4 Conclusions

This chapter investigated the structural capacity of components printed under favorable buildability conditions. The paste age, VMA content, and curing method were considered in order to optimize the buildability of the fresh paste and the mechanical strength of the hardened material, so as to meet the structural capacity requirements and demands of the printed structure. The following conclusions can be drawn from this study:

1.

The buildability of the proposed tailing material can be controlled by adjusting the paste age: the longer the paste age, the better the buildability. At a paste age of 45 minutes, the average layer thickness is 75 mm, accounting for 93.8% of the optimal designed value. A low-slump characteristic indicates good buildability. It is therefore feasible to improve buildability by adjusting the paste age.

2.

The flexural strength of specimens printed at a paste age of 45 minutes reaches 46.1% of that of the mold-cast samples. From the CT identification, the weak-bonding interfaces are characterized by discontinuously distributed small voids along the boundaries of the extruded filaments. The weak interface becomes more noticeable as the paste age increases. The inherent layer delamination negatively influences the structural integrity and capacity of the printed models.

3.

Incorporating 1.5% viscosity-modifying agent (VMA) increases the flexural strength and fracture energy by 25% and 54.5%, respectively. The addition of VMA largely eliminates the influence of interlayer delamination on the fracture path. The flexural strength of the material with 1.5% VMA reaches 67% of that of the mold-cast specimens. The flowability of the fresh paste must be taken into account to meet the basic requirements of desirable printability when a certain amount of VMA is adopted.

4.

The steam-curing method introduced in this study increased the strengths to approximately four times their original values. A flexural strength of 12.93 MPa can be achieved with this post-processing method. The inherent layered structure becomes less distinct as components are cured. Heat curing at 90°C may not be a practical means for rapid manufacturing; however, it is a promising post-treatment method for enhancing mechanical performance.

This chapter investigated the structural integrity and bending resistance of 3D-printed structures. It is practical and beneficial to enhance 3D-printed structures using the proposed methods. However, certain mechanical mismatches remain between the printed and mold-cast specimens. The next step for research is to investigate how to reduce the weakening effect of layer delamination on the structural performance of components. Additionally, the mechanical anisotropy of the printed laminar structure needs further study: currently, 3D-printed objects are either unreinforced or reinforced manually, so fiber-reinforced cement mixtures or fiber-reinforced polymers, which show great potential to increase the ductility of the printed mortar, should be developed. Further research will also be devoted to exploring the frontiers of 3D printing and promoting its effective application in real-life construction scenarios.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128154816000087

Requirements Analysis

James D. McCabe, in Network Analysis, Architecture, and Design (3), 2007

3.9.1 Comparing Application Requirements

Developing environment-specific thresholds and limits is based on comparing the various performance requirements of applications. Typically, one, two, or all of the performance characteristics (capacity, delay, and RMA) for the applications are plotted, and the plot is used to compare relative performance requirements and develop a threshold or limits for that characteristic.

For example, consider a plot of capacity requirements for applications in a network. These may be a subset of all the applications for that environment, such as the top five or six in order of performance or importance. Figure 3.24 shows the resulting plot. There is a cluster of applications in the capacity range from 50 Kb/s to around 1 Mb/s, and then isolated applications at around 4 Mb/s and 6.5 Mb/s. We could choose to pick the top one or two applications and place a threshold between them and the rest of the applications. Most likely, the top two would be grouped together and a threshold placed around 2 to 3 Mb/s.

Figure 3.24. A Plot of Capacity Requirements with Possible Thresholds

As you can see from Figure 3.24, the capacity requirements of the applications are spread across a range of values. We could choose to develop an (environment-specific) performance limit for this group of applications, or a threshold between low and high performance, or both. Note that, for delay, the upper limit for high performance is actually the lowest delay value, not the highest.

In this figure, we could estimate a couple of possible capacity thresholds. The most likely would be grouping Applications C and F, and placing a threshold around 2 to 3 Mb/s. This value is subjective and may need to be approved by the users or management. Sometimes the threshold or limit is obvious; at other times it may be difficult to determine. Consider, for example, the following plot of application capacities (Figure 3.25).

Figure 3.25. A Plot of Capacity Requirements with No Distinct Groupings

In this figure, the capacity requirements are also spread out over a range of values, but in this case there is no clear separation between low and high performance. When performance requirements are not clearly separated, you may not be able to develop a threshold.
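
One simple way to automate the grouping described in this section is to sort the capacity requirements, find the largest gap between neighboring values, and place a threshold in that gap only when it is clearly larger than the typical spacing. The Python sketch below is a hypothetical illustration of that idea rather than a procedure from the book; the example capacities loosely mirror Figures 3.24 and 3.25.

```python
def capacity_threshold(capacities_mbps, min_gap_ratio=2.0):
    """Suggest a capacity threshold between low- and high-performance groups.

    Sorts the requirements, finds the largest gap between neighbors, and
    returns its midpoint -- but only if that gap is at least min_gap_ratio
    times the average gap (i.e., there is a distinct separation).
    Returns None when the values are evenly spread.
    """
    caps = sorted(capacities_mbps)
    if len(caps) < 2:
        return None
    gaps = [caps[i + 1] - caps[i] for i in range(len(caps) - 1)]
    largest = max(gaps)
    if largest < min_gap_ratio * (sum(gaps) / len(gaps)):
        return None  # no distinct grouping (cf. Figure 3.25)
    i = gaps.index(largest)
    return (caps[i] + caps[i + 1]) / 2

# Cluster below ~1 Mb/s plus two high-capacity applications (cf. Figure 3.24).
print(capacity_threshold([0.05, 0.2, 0.5, 1.0, 4.0, 6.5]))  # -> 2.5
# Evenly spread requirements: no clear threshold (cf. Figure 3.25).
print(capacity_threshold([0.5, 1.5, 2.5, 3.5, 4.5]))        # -> None
```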

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123704801500049

Source: https://www.sciencedirect.com/topics/computer-science/capacity-requirement
