Contents

First-Come, First-Served Scheduling
Benefits and Challenges Presented by FIFO
Round Robin Scheduling
Benefits and Challenges Presented by Round Robin Scheduling
Conclusion
OS Concurrency Mechanism
Real-Time, Distributed, and Embedded Environments
Adapting Distributed Real-time and Embedded Pub/Sub Middleware for a Cloud Computing Environment
Cloud Computing
Configuring an Enterprise DRE Pub/Sub System in a Cloud Environment
Concurrency Control Mechanisms in Handling Communications and Synchronization
Optimistic Concurrency Control Mechanism

OS Security Risks and Mitigation Strategy
Administrator's Account
Group Account Risk
Directory Rights
Audit Policy
User Rights Policy
Services
Registry Editing Programs
Trusted Systems
Hack Attempts
Emerging Technologies and Architecture
Current Enterprise Strategic Requirements That Could Benefit from the Emerging Technologies
Complex Solution Architecture and Delivery
Quality Assurance
Skills Enablement
SOA Services
Wireless Services
The Potential Benefits of Adopting These Emerging Technologies
Conclusion

OS Processor and Core Hypotheses

The null hypothesis for this study is:

H0: As a result of upgrading to a multi-processor, multi-core operating system, there will be either no significant difference or a significant increase in support for a distributed/virtual environment.

This is tested against the alternative hypothesis:

HA: As a result of upgrading to a multi-processor, multi-core operating system, there will be a significant decrease in support for a distributed/virtual environment.

A Summary of the Current Operating System in Use

The current operating system in use is a single-processor, single-core operating system. It supports a single-core processor, which has only one execution core, i.e., a single instance of each of the basic components: the ALU, the control unit, and the cache memory. The current operating system allows multitasking, i.e., more than one task can be in progress at the same time, but it does not support the parallel multitasking evident in multi-processor, multi-core operating systems.

Benefits of Upgrading an Operating System Utilizing a Multi-Processor, Multi-Core Configuration

It is of significant importance to upgrade an operating system utilizing a multi-processor, multi-core configuration to support a distributed or virtual environment. Firstly, this allows the operating system to make maximum use of all of the CPU cores, resulting in increased system performance. In addition, while the CPU cores are executing programs, the operating system is under less strain when allocating resources in response to instructions from the various CPU cores.

Upgrading the OS will also optimize the multi-processor, multi-core processing power, because the upgraded operating system can cope with the different cores' requests and instructions during different phases of execution. This also enables the upgraded OS to provide maximum support to the various cores during execution.

Upgrading the Processor and Core

To upgrade the processor and core, first verify hardware support for the processor(s) to ensure that they are compatible.

Secondly, if need be, upgrade the processor core VRMs. The third step is to upgrade the system BIOS and BMC firmware. After upgrading the BIOS and BMC firmware, upgrade the operating system to add support for multi-core processors. Lastly, install the new processors. A quick post-upgrade check is sketched below.
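As a sanity check after the upgrade, the operating system should report every logical CPU. The following is a minimal Python sketch using only standard-library calls; the /proc/cpuinfo cross-check assumes a Linux-style system and is simply skipped elsewhere.

```python
import os
import platform

# Report what the upgraded OS can see. os.cpu_count() returns the number
# of logical CPUs visible to the operating system.
print("Machine:", platform.machine(), platform.system(), platform.release())
print("Logical CPUs visible to the OS:", os.cpu_count())

# On Linux, cross-check against /proc/cpuinfo (one "processor" entry per
# logical CPU); on other systems this block does nothing.
if os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        cores = sum(1 for line in f if line.startswith("processor"))
    print("Cores listed in /proc/cpuinfo:", cores)
```

If the reported count is lower than the number of installed cores, the BIOS and OS upgrade steps above should be revisited.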

Requirements Supported for the Upgraded Processor and Core

The requirements supported for the upgrade are, first, that a compatibility test must clearly confirm that the operating system supports the processor and core being upgraded. The hardware must also be able to support the upgrade. Other components, i.e., the VRMs, BIOS, and BMC firmware, will also be upgraded to balance the process.

Conclusion

The null hypothesis H0 is true: as a result of upgrading to a multi-processor, multi-core operating system, there is either no significant difference or a significant increase in support for a distributed/virtual environment.

Scheduling Algorithms

The environment I am currently using is a non-virtual environment: a Compaq laptop with a 3.1 GHz processor, 3 GB of DDR RAM, and a 300 GB HDD. It has one operating system installed, Windows XP, which is a single-processor, single-core operating system. It supports a single-core processor, which has only one execution core, i.e., a single instance of each of the basic components: the ALU, the control unit, and the cache memory. Thus, there is only one CPU on the motherboard. Based on the environment described above, I intend to use at least two scheduling algorithms, namely First-Come, First-Served scheduling and Round Robin scheduling.

First-Come, First-Served Scheduling

This is the simplest of all operating system process-scheduling algorithms; it maintains a FIFO ready queue. The scheduler picks the process at the head of the queue whenever it needs to run a process. This scheduler is non-preemptive: a running process keeps the CPU until it finishes or blocks on I/O. If it blocks, it enters the waiting state and the scheduler picks the next process from the head of the queue; when the I/O completes and the process is ready to run again, it is put at the end of the queue.
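As a minimal sketch of this behaviour, the following Python simulation serves processes strictly in arrival order; the process names and burst times are invented for illustration.

```python
from collections import deque

def fcfs(processes):
    """Simulate FCFS: processes is a list of (name, cpu_burst) tuples,
    already in arrival order. Each process runs to completion before
    the next one starts (no preemption)."""
    queue = deque(processes)
    clock = 0
    while queue:
        name, burst = queue.popleft()   # take the process at the head
        clock += burst                  # run it to completion
        print(f"{name} finished at t={clock}")

# P1's long burst delays P2 and P3 -- the hold-up effect described below.
fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
```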

Benefits and Challenges Presented by FIFO

With first-come, first-served scheduling, processes with long CPU bursts tend to hold up other processes. It can also reduce overall throughput, since the I/O of processes in the waiting state may complete while a CPU-bound process is still running, after which the devices sit idle; device utilization is not optimized. To optimize resource utilization, it would be better if the scheduler could briefly run some I/O-bound processes that could issue their I/O requests and wait.

Since CPU-bound processes are not preempted, they reduce interactive performance, because an interactive process cannot be scheduled until the CPU-bound process has finished executing. The advantage of FIFO scheduling is that it is simple to implement. It is also fair, one of the characteristics of a good process-scheduling algorithm: the first process in line gets to run first. The disadvantage of FIFO scheduling is that it is not preemptive, which makes it unsuitable for interactive jobs.

In addition, a long-running process delays all processes behind it in the queue. FIFO is best used in a non-virtual machine environment, because non-virtual machines often run operating systems, such as XP, that are less interactive than those used by virtual machines. From the standpoint of processor speed (GHz), FIFO can satisfy the demands of requests without suppressing resource utilization optimization.

Round Robin Scheduling

Round Robin scheduling is a preemptive scheduler.

It is the preemptive version of first-come, first-served scheduling. Processes are taken for processing in first-in, first-out order, but each process is allowed to run for only a specified period of time. The interval from when one process is served to when the next process is served is known as a time slice, or quantum. If a process does not complete within its time slice, the time slice expires and the process is preempted. Likewise, if a process blocks on an I/O operation, it is preempted.

This preempted process is placed at the back of the queue, where it must wait for the processes already in the queue to cycle through the CPU.
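A minimal Python sketch of this mechanism follows; the burst times and the quantum are invented for illustration.

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate Round Robin: processes is a list of (name, burst) tuples.
    Each process runs for at most one quantum; if unfinished, it is
    preempted and moved to the back of the queue."""
    queue = deque(processes)
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # one time slice, or less
        clock += run
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))  # preempted: back of the queue
        else:
            print(f"{name} finished at t={clock}")

round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
# With a very large quantum (say 100), the schedule degenerates to FCFS,
# matching the observation about quantum size in the next section.
```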

Benefits and Challenges Presented by Round Robin Scheduling

A very long quantum makes the algorithm behave like FIFO scheduling, since a process is very likely to finish or block before the quantum is up. A small quantum lets the system cycle through processes quickly, which is wonderful for interactive processes. The advantage of Round Robin scheduling is that it is fair: every process gets an equal share of the CPU. It is also easy to implement. In addition, if the number of processes in the queue is known, it is possible to estimate the worst possible response time for a process (roughly the number of other processes multiplied by the quantum). The disadvantage of Round Robin scheduling is that allocating an equal quantum to every process is not always a good idea. For instance, highly interactive processes will get scheduled no more frequently than CPU-bound processes. Round Robin is best used in a virtual machine environment, because virtual machines usually run operating systems that are very interactive (e.g., z/VM) compared to those used by non-virtual machines. From the standpoint of processor speed (GHz), Round Robin can satisfy the demands of requests without suppressing resource utilization optimization.

Conclusion

Round Robin can be used in both virtual and non-virtual machines, because in both cases resource optimization, in terms of each process being allowed a quantum of CPU time, is achieved without affecting the overall performance of the system.

It will be even faster in non-virtual machines (the current environment in use) than in virtual machines, because non-virtual machines are less interactive and because Round Robin is a preemptive scheduler.

OS Concurrency Mechanism

Real-Time, Distributed, and Embedded Environments

In real-time computing environments, specific time constraints are provided, and the operating system ensures that computations are completed within these constraints. A distributed computing environment enables a computation to use resources located in several computer systems connected through a network.

Lastly, in an embedded computing environment, the computer system is part of a specific hardware system, and the operating system has to meet the time constraints arising from the nature of the system being controlled. Enterprise distributed real-time and embedded (DRE) publish/subscribe (pub/sub) systems manage resources and data that are vital to users. Publish/subscribe (pub/sub) is a middleware pattern specifically designed as an interface to distributed real-time and embedded environments and cloud computing.

Cloud computing is where computing resources are provisioned elastically and leased as a service, and it is an increasingly popular deployment paradigm.

Adapting Distributed Real-time and Embedded Pub/Sub Middleware for a Cloud Computing Environment

Our enterprise adapts DRE pub/sub middleware for a cloud computing environment, so that DRE pub/sub systems can leverage cloud provisioning services to execute needed functionality when on-site computing resources are not available.
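To make the pub/sub model concrete, here is a toy topic-based broker in Python. It is purely illustrative: it is not the enterprise DRE middleware, it carries no QoS guarantees, and all names are invented.

```python
from collections import defaultdict

class Broker:
    """A toy in-process publish/subscribe broker keyed by topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)   # deliver to every subscriber of the topic

broker = Broker()
broker.subscribe("sensor/track", lambda m: print("tracker got:", m))
broker.publish("sensor/track", {"id": 42, "position": (3.1, 7.4)})
```

Real DRE pub/sub middleware adds to this pattern the fine-grained QoS (reliability, latency, bandwidth) discussed in the rest of this section.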

Although cloud computing provides flexible on-demand computing and networking resources, enterprise DRE pub/sub systems often cannot accurately characterize their behavior and demand across the variety of resource configurations cloud computing supplies (e.g., CPU and network bandwidth), which makes it hard for DRE systems to leverage conventional cloud computing platforms. Enterprise DRE pub/sub systems manage data and resources that are critical to ongoing system operations.

Examples include testing and training of experimental aircraft across a large geographic area, air traffic management systems, and disaster recovery operations. These types of enterprise DRE systems must be configured correctly to leverage available resources and respond to the system deployment environment. For example, search and rescue missions in disaster recovery operations need to configure the image resolution used to detect and track survivors depending on the available resources (e.g., computing power and network bandwidth).
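The resolution-selection example above can be sketched as a simple lookup: pick the highest image resolution whose estimated cost fits the resources the cloud environment actually granted. All profile names and numbers below are invented for illustration.

```python
# (label, cpu_cores_needed, bandwidth_mbps_needed), ordered best-first.
PROFILES = [
    ("4K",    8, 50.0),
    ("1080p", 4, 12.0),
    ("720p",  2,  6.0),
    ("480p",  1,  2.5),
]

def pick_resolution(cores_available, mbps_available):
    """Return the best profile that fits the granted resources."""
    for label, cores_needed, mbps_needed in PROFILES:
        if cores_needed <= cores_available and mbps_needed <= mbps_available:
            return label
    return PROFILES[-1][0]   # degraded but still operational

print(pick_resolution(cores_available=4, mbps_available=10.0))  # -> 720p
```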

Many enterprise DRE systems are developed for a specific computing/networking platform and deployed with the expectation that specific computing and networking resources will be available at runtime. This approach reduces development complexity, since system developers need only focus on how the system behaves in one operating environment, eliminating consideration of multiple infrastructure platforms with respect to system quality-of-service (QoS) properties (e.g., responsiveness of the computing platform, and latency and reliability of networked data). Focusing on only a single operating environment, however, decreases the flexibility of the system and makes it hard to integrate into different operating environments, e.g., porting to new computing and networking hardware.

Cloud Computing

Cloud computing is an increasingly popular infrastructure paradigm where computing and networking resources are provided to a system or application as a service, typically for a "pay-as-you-go" usage fee. Provisioning services in cloud environments relieve enterprise operators of many tedious tasks associated with managing the hardware and software resources used by systems and applications.

Cloud computing also provides enterprise application developers and operators with additional flexibility by virtualizing resources, such as providing virtual machines that can differ from the actual hardware machines used. Several pub/sub middleware platforms (such as the Java Message Service and Web Services Brokered Notification) can:

1. leverage cloud environments;
2. support large-scale, data-centric distributed systems;
3. ease the development and deployment of these systems.

These pub/sub platforms, however, do not support the fine-grained and robust QoS needed for enterprise DRE systems.

Some large-scale distributed system platforms, such as the Global Information Grid and Network-centric Enterprise Services, require rapid response, reliability, bandwidth guarantees, scalability, and fault tolerance. Conversely, conventional cloud environments are problematic for enterprise DRE systems, since applications within these systems often cannot accurately characterize their utilization of specific resources (e.g., CPU speeds and memory). Consequently, applications in DRE systems may need to adjust to the resources supplied by the cloud environment (e.g., using compression algorithms optimized for the given CPU power and memory), since the presence or absence of these resources affects timeliness and other QoS properties crucial to proper operation. If these adjustments take too long, the mission that the DRE system supports could be jeopardized.

Configuring an Enterprise DRE Pub/Sub System in a Cloud Environment

Configuring an enterprise DRE pub/sub system in a cloud environment is hard, because the DRE system must understand how the computing and networking resources affect end-to-end QoS. For example, transport protocols provide different types of QoS (e.g., reliability and latency) that must be configured in conjunction with the pub/sub middleware. To work properly, however, QoS-enabled pub/sub middleware must understand how these protocols behave on different cloud infrastructures. Likewise, the middleware must be configured with appropriate transport protocols to support the required end-to-end QoS. Manual or ad hoc configuration of the transport and middleware can be tedious, error-prone, and time consuming; a rule-based alternative is sketched below.
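As a hypothetical sketch of such an automated alternative (all names and thresholds here are invented, not taken from any particular middleware), the transport and its reliability setting could be chosen from a small rule table keyed on measured cloud resources:

```python
from dataclasses import dataclass

@dataclass
class CloudResources:
    bandwidth_mbps: float   # measured available bandwidth
    loss_rate: float        # observed packet loss, 0.0 - 1.0

def choose_transport(res: CloudResources) -> dict:
    """Map measured resources to a transport/QoS configuration."""
    if res.loss_rate > 0.01:
        # Lossy links: favour a reliable transport despite extra latency.
        return {"transport": "tcp", "reliability": "reliable"}
    if res.bandwidth_mbps < 5.0:
        # Constrained links: best-effort delivery keeps latency predictable.
        return {"transport": "udp", "reliability": "best_effort"}
    # Clean, fast links: NAK-based reliability over UDP as a middle ground.
    return {"transport": "udp", "reliability": "reliable_nack"}

print(choose_transport(CloudResources(bandwidth_mbps=50.0, loss_rate=0.001)))
```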

Concurrency Control Mechanisms in Handling Communications and Synchronization

There are three main concurrency control mechanisms for handling communications and synchronization: optimistic, pessimistic, and semi-optimistic. Of the three, the optimistic concurrency control mechanism most effectively supports communication and synchronization in the selected enterprise.

Optimistic Concurrency Control Mechanism

This mechanism delays the checking of whether a transaction meets the isolation and other integrity rules (e.g., serializability and recoverability) until its end, without blocking any of its read or write operations ("...and be optimistic about the rules being met..."), and then aborts the transaction if the desired rules would be violated upon its commit. An aborted transaction is immediately restarted and re-executed, which incurs an obvious overhead (versus executing it to the end only once). If not too many transactions are aborted, being optimistic is usually a good strategy.
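A minimal Python sketch of the optimistic scheme follows: a transaction reads a record's version, works without locking, and validates at commit time, retrying on conflict. The class and function names are invented for illustration.

```python
import threading

class Record:
    """A value guarded by a version number for optimistic commits."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()   # protects only the commit step

    def read(self):
        return self.value, self.version

    def commit(self, new_value, read_version):
        with self._lock:
            if self.version != read_version:
                return False            # conflict: another commit won
            self.value = new_value
            self.version += 1
            return True

def add(record, delta):
    while True:                         # abort-and-retry loop
        value, version = record.read()  # optimistic read, no blocking
        if record.commit(value + delta, version):
            return

account = Record(100)
add(account, 25)
print(account.value, account.version)  # 125 1
```

The retry loop is exactly the "abort and re-execute" overhead described above: cheap when conflicts are rare, costly when they are frequent.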

OS Security Risks and Mitigation Strategy

As is true of all components in any computing environment, security considerations are paramount for operating systems. Performing risk assessments and identifying mitigation strategies in advance is a way to ensure that security has been addressed at the operating system level. Virtualized operating systems bring a new set of challenges. Our enterprise, which adapts DRE pub/sub middleware for a cloud computing environment, is exposed to many risks. Below are the most significant OS and computing environment threats and their mitigations.

Administrator's Account

The OS has two default user accounts, Administrator and Guest.

The Administrator account does not comply with the lockout policy; if it did, someone could lock out every account on the system, which would require a re-install of the operating system itself. As a result, the Administrator account is vulnerable to a brute-force password attack. The mitigation strategy for this risk involves buying and implementing the NT Resource Kit. Its "passprop" program can be installed to force the Administrator account to be locked out according to the lockout policy as if it were a normal account, while still allowing the Administrator account to log on at the console.

Group Account Risk

Several default groups are automatically installed with the enterprise operating system. Some of these groups are very powerful, such as Administrators and Domain Admins; members of Administrators can, for instance, change the password of the Administrator user account. Each of these default groups and their members must be reviewed to ensure that only proper privileges are granted to each group and that each group contains only authorized members. The mitigation strategy is to review the recommended group rights and the membership of key groups to ensure that only authorized members are present.

Directory Rights

The enterprise's default install configuration gives the Everyone group full control (RWXD) at the root directory level. Since the Everyone group is a default group to which every user belongs, every user on the system has full control at the root directory level. Some of the subdirectories (folders), such as the security directory, are further protected, but many directories are open to attack from a normal user account. The mitigation strategy is to work within our particular environment to secure as many of the directories as possible unless access is genuinely required.

Note that if you deny access to the Everyone group, the enterprise OS will deny access to all users, even the Administrator account, so use deny sparingly.

Audit Policy

The enterprise OS's audit policies are set to no auditing by default. Ensure that the recommended settings are put in place.

User Rights Policy

User rights are special rights granted to individual users or groups. They provide global rights for job functions such as backup authority. Some of the user rights are not fully defined; in those cases the rights should not be granted to anyone.

The mitigation strategy is to implement the recommended settings.

Services

UNIX and other operating systems have long had programs (services) that perform special functions, such as File Transfer Protocol (ftp on port 21), Web processing (http on port 80), and Telnet (telnet on port 23). There are many others (over 6,000) that can be installed. Each of these services runs programs written by a vendor, and these programs could be attacked and compromised. Only authorized and security-certified services should be running.
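As a first, purely illustrative step toward the service review described next, a script can probe the well-known ports locally. This Python sketch checks a few of them and assumes nothing about which services are legitimate.

```python
import socket

# Probe a few well-known service ports on the local machine.
for port, name in [(21, "ftp"), (23, "telnet"), (80, "http")]:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        is_open = sock.connect_ex(("127.0.0.1", port)) == 0  # 0 = connected
        print(f"{name} (port {port}): {'OPEN' if is_open else 'closed'}")
```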

The mitigation strategy for this risk is to review all services (Control Settings, Services) and confirm that only authorized and valid services are installed within the enterprise's environment.

Registry Editing Programs

Programs like Regedt32.exe and Regedit.exe (the 16-bit version) in Windows allow direct changes to the NT registry. The NT registry is the complete set of all NT settings and definitions: everything from user accounts to workstation configuration is stored in the registry. The Regedt32.exe and Regedit.exe programs allow modification of registry values.

By default, the Everyone group has full control over these programs, which means that any user can execute them and change any registry value they are authorized to change. The mitigation strategy is to change the permissions on these programs to read and execute (RX) for the Administrators group only.

Trusted Systems

Trusted systems within the enterprise environment are difficult to administer. Domain groups can gain new members without the local administrator's knowledge, and the trust relationships can become very confusing and may grant too much global access.

The solution is to review all the trust relationships and confirm that they are properly established and administered.

Hack Attempts

Many hack attempts have occurred in the past couple of years; they include GetAdmin, Red Button, password hacks, SYN attacks, etc. The mitigation strategy is to ensure that the operating system is kept up to date with the latest system patches.

Emerging Technologies and Architecture

Emerging technologies and architecture play an important role in aligning information technology with our enterprise goals, driving technology strategy, and ensuring adherence to IT standards and principles.

The key to survival and growth in today's business environment is the ability to adapt. Defining the vision, implementation, and governance of emerging technology and enterprise architecture, including event-driven architecture, cloud computing, service-oriented architecture (SOA), wireless, and sensor technologies, identifies the emerging technologies and architectures that have the potential to support future requirements. Other emerging technologies and architectures include:

* Human Augmentation
* Context Delivery Architecture
* Video Search
* Mobile Robots
* Augmented Reality
* Social Software Suites
* Microblogging
* Green IT
* Video Telepresence
* Mesh Networks: Sensor
* Online Video
* Cheaper solar cells
* Data tools
* Mobile BI
* Health IT / Personal Health Records
* HTML 5
* Semantic Web
* Ubiquitous computing
* Geo-enablement
* Smart objects
* Visualization tools
* SMS / MMS
* Virtual Computing lab for state government
* Advanced search technologies

Current Enterprise Strategic Requirements That Could Benefit from the Emerging Technologies

Complex Solution Architecture and Delivery

The enterprise needs to successfully design and deliver complex business solutions based on a broad range of technology and industry knowledge, experience, and assets.

Quality Assurance

The enterprise needs to proactively identify problems, reduce project risk, and avoid costly rework through a variety of architecture, design, and code reviews.

Skills Enablement

The enterprise needs the development and delivery of education for the IT architect and lead developer community.

SOA Services

The enterprise needs the establishment of an SOA strategy, together with assessment, implementation planning, design, development, and integration, to help it realize the benefits of service-oriented architecture and Web services.

Wireless Services

Wireless services will transform our enterprise through edge-of-network technology such as handheld devices, RFID, telematics, smart cards, and embedded computers.

The Potential Benefits of Adopting These Emerging Technologies

Adopting these emerging technologies will enable the enterprise to achieve stronger alignment between IT strategy and business goals.

Secondly, it will enhance the implementation of IT standards and governance for greater technology efficiency. Adoption will also rationalize the variety of platforms and technologies that have resulted in excessive complexity and cost, and it will improve the performance, availability, scalability, and manageability of existing architectures and applications. Furthermore, adopting these emerging technologies will support new business processes with new technologies.

Lastly, it will enable the adoption of reusable assets to drive greater efficiency and faster time to market.

Conclusion

The goal of adopting emerging technology and architecture is to leverage industry best practices in application lifecycle management while focusing on strategic, core business competencies, and to assemble industry and technology expertise, through proven methodology, research innovation, and solution assets and accelerators, that will enable us to achieve greater productivity and business agility.
