Wednesday, July 31, 2019

Economic, Social And Political Economy Essay

Introduction

In 1910, the independent kingdom of Korea was forcibly annexed by Japan; the occupation lasted until the end of the Second World War. After World War II, the United States (US) decided to occupy the southern half of Korea to prevent the Union of Soviet Socialist Republics (USSR), which had been helping North Korea fight the Japanese forces, from taking control of the entire peninsula. The US divided Korea at the 38th parallel to keep Seoul within the American-occupied area, and the USSR did not oppose the division. (Korean War, 2006)

Both the USSR and the United States began to organize governments in their halves of Korea. As they did so, the political factions that had once been united against Japan re-emerged, representing left-wing and right-wing views. The left wing wanted an overhaul of Korea's land ownership laws, which unduly favored rich landowners; the right wing vehemently refused the reform. (Korean War, 2006)

From 1945 to 1948, the US suppressed the widespread leftist movement and backed Syngman Rhee, who had lived for decades in the United States, had solid anti-Communist credentials, and was popular with the right. (Korean War, 2006) The USSR, on the other hand, supported the left and Kim Il Sung, who received support from North Koreans and from China, having fought with Chinese Communist forces against the Japanese in Manchuria in the 1930s. Kim forced a radical redistribution of land when he first came into power.

By the end of 1946 the regimes of both North and South Korea were in place. The division of Korea was formalized in 1948: the South established the Republic of Korea, while the North established the Democratic People's Republic of Korea. (Korean War, 2006) The regime was barely in place in South Korea when it had to contend with a left-wing rebellion in the south, particularly in its southernmost province. North Korea supported the rebellion.
It was South Korea that first provoked North Korea into war, but Kim Il Sung was willing to fight the war too, with the help of the USSR and China. In 1949, fighting broke out between the North and the South along the 38th parallel. In 1950, the army of North Korea crossed the dividing line. The Korean War reached its height from 1950 to 1953. In 1953, a ceasefire agreement was signed; it ended the fighting, but the Korean Peninsula remained divided. (Korean War, 2006)

The Korean War was considered one of the most destructive wars of the 20th century. An estimated 2.4 to 4 million Koreans died, mostly civilians. The other countries that supported either side also suffered casualties: China, which supported the North, lost almost 1 million soldiers, while the US, which sided with the South, lost a little more than 36,000. The economic and social damage to the Korean Peninsula was incalculable. In North Korea, three years of bombing destroyed most of the modern buildings. (Korean War, 2006)

Because of the Korean War, the US and Japanese economies received a much-needed boost after World War II. Japan became the source of materials for the war, while defense spending in the US nearly quadrupled in the last half of 1950. (Korean War, 2006)

The North Korean Economy after the War

Because North Korea endured three years of US bombing, a new capital had to be rebuilt after the war. By 1960, the discipline and forced-labor policies of Kim Il Sung's regime had resulted in recovery and development, but the general standard of living remained low. The emphasis was on heavy industrial growth rather than on the production of consumer goods. (North Korea, 2006) In 1995, there was a nationwide food crisis; in 1996, it became a widespread famine.
The withdrawal of Soviet and Chinese food subsidies, the government's agricultural policies, and a series of floods and droughts all contributed to the food crisis. International humanitarian relief agencies provided food aid and other relief efforts. By 1998, an estimated 1 million people had died of starvation and famine-related illnesses. The food crisis continued into the early 2000s. (North Korea, 2006)

The Juche Idea

Juche is the official state ideology of North Korea and the basis for its political system. Juche literally means "main body" or "subject"; in North Korean sources, it has been translated as "independent stand" and the "spirit of self-reliance". The core principle of the Juche ideology is that "man is the master of everything and decides everything". (Juche, 2006, para. 1)

Kim Il Sung advanced Juche as a slogan in a speech titled "On Eliminating Dogmatism and Formalism and Establishing Juche in Ideological Work", made in rejection of the policy of de-Stalinization in the Soviet Union. Juche became a systematic ideological doctrine in the 1960s, when Kim Il Sung outlined its three fundamental principles: (1) independence in politics, (2) self-sustenance in the economy, and (3) self-defense in national defense. (Juche, 2006, para. 2) In 1982, Kim Jong-il authored a document titled "On the Juche Idea".
An article in Wikipedia said: According to Kim Jong-il's On the Juche Idea, the application of Juche in state policy entails the following: 1) The people must have independence (chajusong) in thought and politics, economic self-sufficiency, and self-reliance in defense; 2) Policy must reflect the will and aspirations of the masses and employ them fully in revolution and construction; 3) Methods of revolution and construction must be suitable to the situation of the country; and 4) The most important work of revolution and construction is molding people ideologically as communists and mobilizing them to constructive action. (Juche, 2006, para. 3)

One of the first applications of the Juche idea in North Korea was the Five-Year Plan known as the Chollima Movement. The plan involved rapid economic development, with a focus on heavy industry, to ensure independence from the USSR and China. (Juche, 2006, para. 4) In reality, however, Juche's economic program of "self-reliance" has resulted in economic dependence. North Korea has been an aid-dependent regime: from 1953 to 1976 it depended considerably on Soviet industrial aid, and the USSR remained North Korea's greatest economic benefactor until its collapse in 1991. The country experienced a food crisis early in the regime which later developed into a famine, and it has accepted aid from China, South Korea and the international community. In 2005, it was the second largest recipient of international food aid. In 1998, Juche made pragmatic adaptations to capitalism. (Juche, 2006, para. 5)

The state ideology has been an alternative to traditional religion, and Juche has incorporated religious ideas into the state ideology. Juche is considered the largest political religion in North Korea; the practice of all other religions is overseen and subject to heavy surveillance by the state. (Juche, 2006, para. 6)

Improving Relations with the South

After the Korean War, North Korea developed a hard stance against the South. In the 1960s, an assassination team nearly succeeded in killing Park Chung Hee, the South Korean president at the time. In 1968, North Korean gunboats seized a US intelligence-gathering vessel and subjected its crew to extreme conditions for a year. In 1969, a US reconnaissance plane was shot down, and guerrilla raids were launched against the South. These attacks made the South even more dedicated to renewing its defense measures and influenced the formation of a harder political order in South Korea. (North Korea, 2006)

Through the 1970s and 1980s, there were efforts to effect the unification of North and South Korea, but these efforts failed. In June 2000, the leaders of North and South Korea agreed to promote reconciliation and economic cooperation between the two countries. This was the first face-to-face meeting between the leaders of the two countries since the peninsula was divided. (North Korea, 2006) The meeting led to the first officially authorized cross-border visits of family members separated since the Korean War. The agreement also had many favorable consequences for both countries: trade and investment increased, military tensions relaxed, road and rail links severed by the Korean War were partially reopened, and mail service between the two countries began. (North Korea, 2006)

During the opening ceremonies of the 2000 Summer Olympic Games in Sydney, Australia, the athletes from North Korea and South Korea paraded together under one flag, the neutral flag of the Korean Peninsula, though they still competed separately in the events. (North Korea, 2006) In October 2000, Kim Dae Jung was awarded the Nobel Peace Prize for his efforts to bring about reconciliation between the two countries.
(North Korea, 2006)

South Korea, together with China, has been instrumental in bringing almost 1 billion dollars in aid and investment to North Korea; South Korea's help prevented the collapse of the North Korean economy (Fajola & Fan, 2006). However, recent political developments may trigger old hostilities in the region. North Korea's insistence on developing and testing nuclear weapons may bring war to Northeast Asia again (Fajola & Fan, 2006). South Korea, despite its own pressing needs, offered to supply North Korea with energy if it would cease the production of nuclear weapons (Nguyen, 2006). There is still no news on whether North Korea has accepted the offer.

Politics and International Relations

Before the Korean War, the Workers' Party of Korea was established, and Kim Il Sung emerged as the leader of North Korea. He enjoyed the military support of the USSR until Soviet troops withdrew in 1948. Under the Workers' Party leadership, political and economic changes were made. Egalitarian land reforms were enforced: there was a radical redistribution of land from landowners to laborers and tenant farmers, and the landless laborers and tenant farmers supported these reforms. The reforms involved massive confiscation of land and wealth from the Japanese and from enemies of the regime. There was also party-directed economic planning and development. (North Korea, 2006)

Kim Il Sung fought against the Japanese and, in 1949, welcomed the war against South Korea. When North Korean forces crossed the dividing line into the South, the US joined the fighting with the approval of the UN, helped by small contingents from Great Britain, Canada, Australia, and Turkey. The USSR, an ally of North Korea, refused to take part in the vote during the deliberations in the UN. In October 1950, China entered the war in support of North Korea. When a ceasefire was finally agreed upon, thousands of lives had been lost on both sides.
Millions of dollars' worth of infrastructure was also destroyed, particularly in the North, which experienced massive bombing operations by the US. (North Korea, 2006)

On the political front, the North Korean leadership began to veer away from USSR influence, and the intensifying conflict between China and the USSR allowed North Korea even more independent action. (North Korea, 2006) North Korea's actions after the Korean War seemed to be geared toward building nuclear might. When both North and South Korea joined the UN in 1991, they signed agreements regarding nuclear and conventional arms control and reconciliation. In 1992, North Korea signed an agreement allowing the International Atomic Energy Agency (IAEA) to inspect the country's nuclear facilities. In 1993, however, the North Korean government refused inspection of nuclear waste sites believed to contain undeclared nuclear material for nuclear weapons; this resistance continued through the first half of 1994. (North Korea, 2006) North Korea also suspended its formal acceptance of the 1968 Treaty on the Non-Proliferation of Nuclear Weapons (NPT), which it had signed in 1985. In 1993, the US Central Intelligence Agency (CIA) suspected North Korea of building at least one atomic weapon from plutonium extracted from fuel rods at a nuclear power plant. (North Korea, 2006)

In 1994, the US and North Korea reached an agreement called the Agreed Framework. Under this agreement, North Korea would suspend the operation of designated nuclear facilities capable of producing and reprocessing weapons-grade plutonium and allow IAEA inspectors to verify the suspension. The agreement called for annual deliveries of heavy fuel oil to North Korea, and the US agreed to take steps to end economic sanctions against North Korea that had been in place since the Korean War. (North Korea, 2006) The 1994 Agreed Framework was also a step toward normal diplomatic relations between the US and North Korea.
North Korea agreed to suspend operation of the nuclear facilities in return for two new reactors to be built by the US, South Korea and Japan. In 1995, construction of the two reactors started. In 2002, the US abrogated the agreement, charging North Korea with violating it by initiating a secret weapons-grade uranium-enrichment program; North Korea denied that it had such a program. After the US abrogation in 2002, North Korea resumed plutonium production, and in February 2005 it issued a statement that it was now a "nuclear weapons state." (North Korea, 2006)

While relations between the two Koreas were improving, relations between the US and North Korea became even more strained over the issue of nuclear weapons. The US had placed North Korea on a list of countries supporting terrorism and had characterized it as part of an "axis of evil". China attempted to act as a mediator between North Korea and the US, but the US refused to meet in one-on-one negotiations. As a compromise, China fashioned a series of negotiations among China, Japan, Russia, North Korea, South Korea, and the US, held in Beijing, China. (North Korea, 2006)

Without reaching an agreement, the six-party talks recessed in early August 2005. When the talks resumed in September 2005, North Korea pledged to abandon all nuclear weapons and programs in exchange for economic aid and security guarantees, but the talks then stalled. In early July 2006, North Korea launched seven test missiles, including a long-range Taepodong-2 missile, which fell into the Sea of Japan. Whether or not the tests were considered successful, they raised tensions in the area, and the concerned international community, through the UN Security Council, called for economic sanctions against North Korea. (North Korea, 2006)

The 2006 Nuclear Testing

Analysts say that North Korea's gaining bragging rights as a nuclear power may have political and economic fallout.
Many fear that the nuclear tests being conducted by North Korea could trigger instability in Northeast Asia. China, which had been a supporter of North Korea, is reconsidering its support for Kim Jong Il. China, with the help of South Korea, had given billions of dollars in aid and investment to North Korea; both countries helped prevent the collapse of the economy for fear that such a collapse would send refugees pouring across their borders. An Asia Times Online writer said that South Korea offered to supply North Korea's energy needs if the latter abandoned its nuclear arms. China's foreign minister, Li Zhaoxing, expressed the Chinese government's opposition to the nuclear test. (Fajola and Fan, 2006)

Because of the tests, South Korea stopped the delivery of emergency assistance meant to help the North deal with recent floods. President Roh Moo Hyun said, "The South Korean government at this point cannot continue to say that this engagement policy [sunshine policy] is effective. Ultimately, it is not something we should give up on, but objectively speaking, the situation has changed. Being patient and accepting whatever North Korea does is no longer acceptable" (qtd. in Fajola and Fan, 2006, para. 7).

Analysts say that the shift in the positions of China and South Korea is partly based on the possible reaction of Japan, the nation most threatened by North Korea's ballistic missiles. A nuclear-armed North Korea could lead Japan to arm itself more aggressively; according to a US congressional report, it may lead Japan, South Korea and Taiwan to develop their own nuclear weapons, starting an arms race in the region and feeding regional disputes. (Fajola and Fan, 2006) Japan has already said that it would impose harsher measures against North Korea, which could include a ban on remittances sent home by North Koreans working in Japan.
(Fajola and Fan, 2006)

Another motivation for China's position is its failed attempt to mediate between the US and North Korea in the series of negotiations in Beijing. To save face and to meet international pressure, China may impose tougher economic sanctions and reduce aid to North Korea to force it to stop the production and testing of its missiles. (Fajola and Fan, 2006)

Seung Joo Baek, an analyst from the Seoul-based Korea Institute for Defense Analyses, also said: "North Korea's message is that no matter how hard South Korea, Japan, the United States gang up on them, they won't budge. They want to be recognized as a nuclear power. They are assuming that it is the only thing that will keep them safe. We will have to wait and see if they are right." (qtd. in Fajola & Fan, 2006, conclusion)

References

Fajola, A., & Fan, M. (2006, October 10). North Korea's political and economic gamble. Retrieved November 30, 2006, from http://www.washingtonpost.com/wp-dyn/content/article/2006/10/08/AR2006100801169_2.html
Juche. (2006). In Wikipedia. Retrieved November 30, 2006, from http://en.wikipedia.org/wiki/Juche
Korean War. (2006). In Encyclopædia Britannica Online. Retrieved November 30, 2006, from http://www.britannica.com/eb/article-9046072
Korean War. (2006). In Microsoft Encarta Online Encyclopedia 2006. Retrieved November 30, 2006, from http://encarta.msn.com
Nguyen, D. (2006, May 13). South Korea enters the Great Game. Retrieved November 30, 2006, from http://www.atimes.com/atimes/Korea/HJ10Dg02.html
North Korea. (2006). In Microsoft Encarta Online Encyclopedia 2006. Retrieved November 30, 2006, from http://encarta.msn.com

Tuesday, July 30, 2019

Implementation of an Information System for a Financial Institution

INTRODUCTION

Background

Ribeiro and David (2001) state that information technology has had a number of significant impacts on organizations over the years. Such impacts are:
• It has created opportunities for competitive advantage amongst competitors in any industry
• It has improved the relationship between customers and organizations
• It has helped with the development of new products as well as services
• It has allowed organizations to perform tasks which would have remained impossible without the use of a computer system
• It has reduced the total cost incurred in transaction processing for banks and other financial institutions

History of the IT Manager

Having applied for the post of IT systems manager, I list below my qualifications and information about my past work experience:
• A master's degree in Information Systems, with a technical background in Windows Server and desktop technology
• Professional qualifications in Cisco and MCSE, with an understanding of some server-grade applications including IIS, Apache, SharePoint, DNS, SQL and Foundstone
• Reasonable knowledge of large enterprise LAN/WAN environments
• 8 years' experience in project management, leadership and organizational skills
• 7 years' IT managerial experience in other financial institutions
• 5 years' experience with client technologies
• 4 years' working experience developing effective IT for financial institutions

History of Progress Bank

Progress Bank was established in 1999 and until now has had no IT department in place. Its customer base, like its staff, is relatively small. Recently, it merged with another bank in a bid to make it stronger; this has led to increased operational activity, an increased customer base, more staff, and a decision to introduce and implement an effective IT department for the bank.
The board decided to set up the department to ease its work, allow free flow of communication between the various departments, hasten decision-making processes, and improve the turnaround time of daily operational activities. The board members want the IT manager to report directly to the bank's Head of Operations. About 10-12 information technology specialists would be required to work with him in the new department; their roles and responsibilities are to develop, maintain and support the bank's cash and commercial product management systems, to reconcile accounts, to manage the cash systems and to control the disbursement of funds. Recruitment consultants were engaged to help recruit appropriate candidates, and so far eight people have been recruited to work with the IT manager, including network technicians, network assistants, and network engineers.

Responsibilities of the IT Manager

The major tasks and responsibilities of the IT manager have been identified as:
1. Development of an information system for the bank.
2. Coordination, monitoring and supervision of the supporting staff for developing, designing, coding, maintaining and modifying application programs for a limited area and a small number of projects.
3. Working extensively with the business units of the bank in support of their business processes, electronic business communication and transactional needs.
4. Provision of analytical support for applications-related activities, including the customer experience, marketing, technology, human resources and operations departments.
5. Leading the deployment of advanced information technology solutions relating to commercial product needs.
6. Recommendation and suggestion of strategies, as well as hardware and software enhancements, to increase employee productivity.
7.
Administering, recommending and implementing changes to policies that affect the employees of the various departments.
8. Making the flow of information within the organization easier and faster through the development of an intranet.
9. Making communication two-way, i.e. both vertical and horizontal.
10. Developing a customer database for the bank.
11. Selecting, developing and evaluating personnel to ensure the efficient and effective operation of assigned functions.
12. Ensuring that the project budget, schedule and performance requirements are fully met.
13. Regular interaction with customers and peer-group managers.
14. Ensuring that the organization operates fully in accordance with established procedures and practices.

How to Measure Implementation Success

The success of the implementation of an information system can be measured by taking note of the following:
i. User satisfaction with the system or with the outcomes of using the system.
ii. Favorable attitudes on the part of the users towards the system.
iii. The overall payoff to the organization.
iv. The extent to which the system accomplishes the organizational objectives.

Limitations

The major limitation to be considered is cost: it would be costly for the organization to fully introduce and implement an effective information system. Another major factor is the need for training and development of the existing members of staff. Training and development sessions, on-the-job training, meetings and discussions need to be arranged for staff in a way that does not interfere with day-to-day operational activities. The whole process of change may also be cumbersome for the employees, because they would have to transfer the information and data of the various departments from files, which were their major form of storage, to the computers.
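The four success criteria listed above are qualitative, but they can be rolled up into a single indicator for reporting to the board. Below is a minimal sketch of that idea; the 0-10 survey ratings, metric names and equal weighting are hypothetical assumptions, not taken from the source:

```python
def implementation_success(metrics, weights=None):
    """Combine the four criteria (each rated 0-10) into one weighted score."""
    criteria = ["user_satisfaction", "user_attitude",
                "organizational_payoff", "objectives_met"]
    if weights is None:
        # default: all four criteria count equally
        weights = {c: 1.0 for c in criteria}
    total_weight = sum(weights[c] for c in criteria)
    score = sum(metrics[c] * weights[c] for c in criteria) / total_weight
    return round(score, 2)

ratings = {"user_satisfaction": 7, "user_attitude": 8,
           "organizational_payoff": 6, "objectives_met": 7}
print(implementation_success(ratings))  # 7.0
```

In practice the weights would be negotiated with the board, since "overall payoff to the organization" may matter more to them than user attitudes.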
INFORMATION SYSTEM

An information system refers to the interaction between people, processes, data and technology: the way people interact with technology to support business processes. Information systems are distinct both from information and communication technology (ICT) and from business processes, although an information system has an ICT component and helps to control the performance of business processes (Zhu and Meredith, 1995). An information system can be defined as a work system involving the activities necessary for processing (capturing, transmitting, storing, retrieving, manipulating and displaying) information (Wang and Strong, 1996). It can also be considered a semi-formal language that supports decision-making as well as action.

Components of an Information System

The components of an information system include:
i. IT, comprising the hardware and the software.
ii. Data/information.
iii. Procedures/policies.
iv. People.
v. Purpose.
vi. Communication networks.

Hardware Standards

There are different standards of computer hardware; the hardware consists of the physical parts of the system that can be seen. The standards will be reviewed and revised occasionally based on emerging desktop technologies and developments in software (Avgerou, 2001). It is recognized that in the bank, the ability to share vital information easily and quickly is very important. Apart from quick sharing of information, the software environment is also important, especially for word processing, customer databases, spreadsheets, network browsing and electronic mail. Therefore, the development of a bank-wide computing infrastructure based on good hardware and software standards will improve day-to-day operational activities and interactivity between the various departments of the bank.
The standards will also help facilitate the quick exchange of information and important documents both within and outside the bank. According to Stair and Reynolds, the hardware standards are based on the technology presently available in addition to the present needs of the bank, and they apply to both the Windows and the Macintosh platforms. However, for each hardware configuration, some considerations have to be made, which include:
i. Easy connectivity to the bank's network.
ii. Easy connectivity to external systems and other organizations.
iii. The in-house experience with the chosen product and configuration.
iv. The maximum period for which the machine can effectively function.
v. The availability of service from external hardware repairers.

Different types of computers can be purchased, but regardless of the type, the minimum configuration should be:
i. Intel Core 2 Duo processor.
ii. 2 GB RAM (since Windows Vista will be used).
iii. 60 GB hard disk.
iv. CD-ROM/DVD drive.
v. Network connection.
vi. 3-year warranty.

Because of the nature of the tasks performed by the bank, there will be a need to archive data, so a DVD+R drive is recommended.

Recommendations on What to Purchase

Monitors: High-resolution flat-panel monitors are recommended because of cost constraints; as time goes on, they could be changed to dual monitors.
Printers: The HP LaserJet P2015dn (monochrome) and HP Color LaserJet 2605dn (color) are recommended.
Scanners: USB scanners are recommended.
Other peripherals such as modems, NICs and drives: From previous experience, a personal relationship has been developed with MNJ Technologies Direct, so it is recommended that supplies be purchased from them.

Software Standards

The software standards have many advantages, including:
1. Improved data sharing, to ensure:
a. The sharing of data between applications such as word processors, databases, spreadsheets and so on.
b. That there are identical resources on each desktop, providing easy transfer of information and a consistent tool-set for all bank workers.
c. Consistency of file formats, to provide optimal file sharing between individuals, units and departments within the organization.
2. Improved training, which focuses on:
a. Team training in various courses and workshops for different levels of user proficiency, i.e. the introductory, intermediate and advanced stages.
b. Computer-based training courses centered on selected software packages.
3. Improved support from the IT support staff, focusing on:
a. Depth of knowledge of each application instead of breadth across a large number of applications.
b. Product expertise.
4. Smoother software installation and upgrades, to ensure:
a. The proper installation of the different software on new computers, usually as part of the initial hardware installation.
b. That installation is a routine process rather than a specialized one for each individual, maximizing time and resources.
c. That upgrades are tested and properly documented in order to reduce potential incompatibilities.

Types of Software Standards

1. Fully supported software: it is my responsibility as the IT manager to ensure that the appropriate software is installed, to troubleshoot software problems, to provide training courses and to provide documentation for selected packages. Fully supported software includes:
i. Office productivity suite: MS Office (Microsoft Word, Excel, PowerPoint, Access).
ii. Electronic mail/calendar.
iii. Web browser: Internet Explorer 6, Safari 1 (Mac OS 10.2), Firefox 2.
iv. Web course development: Desire2Learn.
v. Web page development: Dreamweaver MX.
vi. Image editing: Adobe Creative Suite 2.0.
vii. Operating systems: Windows XP and Windows Vista.
viii. File transfer: Transmit, FileZilla 1.7.
ix. Other utilities: PowerArchiver 2000, Norton Antivirus 10.15, Print Key 2000.
x. Network operating system: NetWare 6, Microsoft server.
xi. Network clients: NetWare client 4.9 SP2 (Win XP).
2. Partially supported software: this may include some versions of the fully supported software and, in some cases, a new release of a standard application. It includes:
i. Mathematical software: Maple 10, Matlab 2006.
ii. Telnet: Host Explorer (Telnet) 4, PuTTY (Win).
iii. Operating systems: Windows 2000, Mac OS X 10.3.
iv. Statistics: SPSS 15.x, SPSS 12.x, Minitab 15.x.
v. Office productivity suite: MS Office XP.
3. Non-supported software: software that the IT department will neither install nor provide follow-up support for, because it is considered obsolete. Examples include all Microsoft DOS and Windows 3.1 based software.

The IT department will make regular changes to the computer hardware and software standards, and these will be communicated to all members of staff. Sufficient time will be allowed for migration to new standards; changes will also be made regularly to the hardware configurations as technology and prices change, and these too will be communicated to all members of staff.

Operating Systems

Operating systems are the most important software running on a computer. Without an operating system, application software, which is designed to communicate with the hardware through the operating system, cannot run. Operating systems can be classified into:
a. Single-program operating systems, and
b. Multitasking operating systems.
A single-program operating system allows only one program to run at a particular time. This design gave way to the multitasking operating system because it proved time-consuming and impractical to close one application in order to open another, especially when copying or transferring data between applications.
The multi-tasking operating system is a type of OS that enables a single user to have more than one application open at the same time. It usually gives the computer the task of determining how many time slices will be allocated to each program: the main program gets the most, and the rest are distributed to the remaining programs depending on their rates of activity. There are basically three types of multi-tasking operating systems: single user multi-tasking systems, real time operating systems and multi user operating systems. Real time operating systems are usually used to control scientific instruments, industrial systems and so on; the user has little control over the activities performed by this type of system. The single user multi-tasking system allows a single user to open and run different applications at the same time; examples of this type are Microsoft's Windows and Apple's Macintosh. Multi user operating systems give many users access to the resources of a single computer at the same time; an example is UNIX. The operating systems in common use include Windows 95, Windows 98, Windows Me, Windows NT, Windows 2000, Windows XP (which comes in two versions, Home and Professional), Windows Vista, Windows CE, Apple Macintosh, Unix and Solaris (Charette, 2005).

Network Security

Computer networks can be either public or private. They are used daily to conduct transactions and to hasten communications among individuals, businesses and groups within an organization. A network comprises 'nodes', which can be referred to as 'client' terminals, and one or more 'servers' or 'host' computers. They are usually linked by communication systems, which may be private, such as those used within a company, or public, such as the Internet, which can be accessed by members of the public.
However, due to technological advancement, most companies' host computers can now be accessed by employees within the offices over a private communications network, and outside the offices through normal telephone lines (Tatnall et al., 2002). Network security can then be described as all the activities that organizations, institutions and enterprises undertake to protect the value of their assets and the integrity and continuity of their operations. In order to make the network secure, threats should be identified and strategies put in place to combat them, making use of the different network security tools.

Threats to network security

There are different threats to network security, and they include:
a. Viruses: computer programs written by programmers with the aim of infecting computers when triggered by a certain event.
b. Trojan horse programs: delivery vehicles for destructive code, which can appear as harmless or even useful software programs.
c. Vandals: software applications that can destroy the computer.
d. Attacks: these could be information-gathering activities which collect data that is used to compromise networks; access attacks which exploit network vulnerabilities to gain entry to e-mails, databases and the corporate network; or denial-of-service attacks which prevent access to some or all parts of the computer system.
e. Data interception: the altering of data packets that are being transmitted.
Some network security tools that can be put in place include:
a. Antivirus software packages: these are used to counter most virus threats. They need to be updated regularly in order to be effective.
b. Secure network infrastructure: firewalls and intrusion detection systems provide protection for all areas of the network, enabling secure connections.
c.
Virtual private networks: these are used to provide access control and data encryption between different computers on a particular network. They allow workers to connect safely to the network without the risk of someone else intercepting the data.
d. Encryption: this is used to make sure that messages cannot be read by anyone other than the authorized recipients.
e. Identity services: services that identify users and control their activities as well as their various transactions on the network. Services used here include authentication keys, passwords, etc.
However, no single solution can protect against the variety of the aforementioned threats; as a result, multiple layers of security tools should be put in place. Network security is usually accomplished through hardware as well as software, with constant updates of the software to protect further against emerging threats. In order for the network security system to be effective, it is important that all the network security tools work hand in hand to minimize maintenance and to improve security.

Client Server Computing

Client-server computing can be defined as a distributed computing model in which client applications request services from server processes. Here, the clients and the servers run on different computers that have been interconnected by a computer network. Basically, server software accepts requests for data from the client software and returns the results to the client. The major focus in client-server computing is on the software. A common example of client-server computing is the use of the Internet, such as the collection of information from the World Wide Web. However, client-server computing generally applies to systems in which the organization runs various programs that have multiple components distributed among different computers in a particular network.
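The request-and-reply pattern of client-server computing can be sketched in a few lines of Python using standard-library sockets. This is a minimal illustration only, not a production design; the port number, the "BALANCE" request and the reply format are all hypothetical:

```python
import socket
import threading
import time

def run_server(port):
    # Server process: accepts one request and returns a result to the client
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    request = conn.recv(1024).decode()           # e.g. "BALANCE 12345"
    conn.sendall(("RESULT for " + request).encode())
    conn.close()
    srv.close()

def run_client(port, request):
    # Client process: sends a request and returns the server's reply
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    for _ in range(50):                          # retry until the server is listening
        try:
            cli.connect(("127.0.0.1", port))
            break
        except ConnectionRefusedError:
            time.sleep(0.05)
    cli.sendall(request.encode())
    reply = cli.recv(1024).decode()
    cli.close()
    return reply

# Run the "server" in a thread to stand in for a separate machine
server = threading.Thread(target=run_server, args=(5050,))
server.start()
print(run_client(5050, "BALANCE 12345"))         # -> RESULT for BALANCE 12345
server.join()
```

In a real deployment the two functions would run on different machines on the network, which is exactly the separation the client-server model describes.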
The concept is linked with enterprise computing, which ensures the availability of computing resources. Client-server systems are important and useful in banks because, among other things, they allow easy access to account information held on a central database server. This is very useful in day-to-day operational activities. All access is done through a PC client, which provides a graphical user interface (GUI). Data such as individual account numbers can be entered into the GUI, along with the different types of transactions made on the account, be they withdrawals or deposits. The PC client validates the data, transfers it to the database server and eventually displays the results.

Client Server Toolkits

A number of software toolkits for building client-server software effectively are available today. These toolkits are referred to as middleware; examples are the Open Software Foundation (OSF) Distributed Computing Environment (DCE), the Distributed Component Object Model (DCOM), Message-Oriented Middleware (MOM) and Transaction Processing monitors (TPM).

Data Base Management System

This is a collection of programs that enables effective storage, modification and extraction of information from a database. Its primary goal is to provide an environment that is convenient and efficient for the storage and retrieval of information. Different types exist, ranging from small systems running on personal computers to huge systems running on mainframe computers. Examples of database management systems are Microsoft Access, MySQL Server, Oracle and FileMaker Pro.
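The storage-modification-extraction cycle described above can be illustrated with SQLite, a small DBMS that ships with Python's standard library. The table layout and account number here are hypothetical, loosely following the banking example from the client-server discussion:

```python
import sqlite3

# In-memory database standing in for a bank's central account store
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (number TEXT PRIMARY KEY, balance REAL)")
db.execute("INSERT INTO accounts VALUES ('12345', 500.0)")

# A deposit transaction: modify the stored data...
db.execute("UPDATE accounts SET balance = balance + 250 WHERE number = '12345'")
db.commit()

# ...then extract the updated information with a query
row = db.execute("SELECT balance FROM accounts WHERE number = '12345'").fetchone()
balance = row[0]
print(balance)  # -> 750.0
db.close()
```

The same SQL statements would work largely unchanged against the larger systems named above, which is the point of a common query interface.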
Examples of the use of database systems include:
• Automated teller machines (ATMs)
• Computerized library systems
• Computerized parts inventory systems
• Flight reservation systems
• Employee information systems
• Company payroll
• Credit card processing systems
• Sales tracking systems, and so on
The internal organization determines the ease and flexibility of information extraction. Requests for information from a database are made in the form of a query, and this information can be presented in different formats. The database management system includes a report writer program which enables the output of data in the form of a report; some also include a graphics component which allows the output of information in the form of graphs and charts. The major purpose of a database system is that it provides users with an abstract view of data: data is usually stored in complex data structures, but users see a simplified view of the data.

Model View Controller

Model view controller is a design pattern used by applications which need the ability to maintain multiple views of data. It focuses on a separation of objects into three categories:
• Models: for the maintenance of data
• Views: for the display of all or a portion of the data
• Controllers: for the handling of events affecting both the models and the views
Because of this separation, multiple views and controllers can interact with the same model, and new views and controllers that never existed before can interact with a model without forcing a change in the design of the model. The controller can change a model or a view, or change both, in response to certain events. When a controller changes the model, all the dependent views update automatically; similarly, when a controller changes a view, the view gets data from the model to update itself.
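The model-view-controller interaction just described can be sketched in plain Python. This is a minimal illustration, not a full framework; the class and view names are hypothetical:

```python
class Model:
    # Maintains the data and notifies every registered view of changes
    def __init__(self):
        self._data = 0
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def set_data(self, value):
        self._data = value
        for view in self._views:        # dependent views update automatically
            view.update(self._data)

class View:
    # Displays (here: remembers) all or a portion of the data
    def __init__(self, name):
        self.name = name
        self.shown = None

    def update(self, data):
        self.shown = data

class Controller:
    # Handles events by changing the model; the views follow on their own
    def __init__(self, model):
        self.model = model

    def on_event(self, value):
        self.model.set_data(value)

model = Model()
chart, table = View("chart"), View("table")
model.attach(chart)
model.attach(table)
Controller(model).on_event(42)
print(chart.shown, table.shown)         # both views now reflect the model: 42 42
```

Note that adding a third view would require no change to `Model` or `Controller`, which is exactly the design benefit the pattern claims.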
Enterprise resource planning (ERP)

This is a term used to describe the set of activities, supported by multi-module application software, that helps businesses and companies to manage the important parts of their business. It is a package that promotes the seamless flow of information in an organization. The information from the ERP system provides visibility for key performance indicators that are necessary for meeting corporate and business objectives. ERP software applications are useful in managing product planning, providing customer service, purchasing, inventories and tracking orders. Enterprise resource planning includes application modules for the finance and human resources aspects of a business. Typically, it has both modular hardware and software units that communicate on a local area network. This allows a business to add or reconstruct modules while preserving the integrity of the data. Some of the established players in the ERP market are SAP and PeopleSoft, while newer entrants include Oracle, IBM and Microsoft. Before an organization implements ERP, certain issues need to be addressed, and they are stated below:
• The popular information systems
• Fluctuations in the choice of technology
• The ability of the market players to stay in tune with ERP
• The effective ways to implement business applications like ERP
• Ways to benefit from it in order to gain competitive advantage
• The necessity for the innovation of software applications
All of these are important to take note of, and they will eventually determine the business model of the organization. The implementation of ERP is a very crucial factor in the ERP system. The success of a good ERP implementation lies in quicker processes, which makes training very important; the speed and extent of the training eventually determine the worth and the value of the ERP.

Decision Support System

This is a term that describes computer applications which enhance the user's ability to make decisions.
It describes a system that is designed to help decision makers identify problems and make decisions to solve those problems by using information from a combination of raw data, personal knowledge, business models and communications technology (Hanna et al., 2003). Information that can be gathered and presented by a decision support system includes:
• Comparative sales figures from one period to the next
• Projected revenue figures, usually based on assumptions about new product sales
• A stock of all the current information assets, which could be data sources, data warehouses, data marts, etc.

Components of Decision Support System

According to Bhargava et al. (1999), the components of a decision support system can be classified as:
• Inputs: numbers and characteristics that are used for analysis
• User knowledge and expertise: inputs that require manual analysis by the users
• Outputs: transformed data that aid the generation of the DSS decisions
• Decisions: the results generated by the DSS

Applications of Decision Support System

Decision support systems can be used and applied in various fields. Some of these are as follows:
1. They can be used for medical diagnosis in clinics.
2. They are used extensively in business and management to allow faster decision making, better allocation and utilization of resources and early identification of negative trends which could pose threats to the organization.
3. They are used in agricultural production systems to facilitate decision making at the farm and policy levels.
4. They can also be used in forest management for long-term planning.
5. They can be designed to make useful decisions in the stock market, or in the marketing department of a bank to decide which segment or target group to design a product for.
A decision support system is basically useful in any field where effective organization is necessary.
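The four components classified above (inputs, user knowledge, outputs, decisions) can be sketched as a toy rule-based function. The growth formula and the zero threshold are purely illustrative assumptions, not values taken from the cited sources:

```python
def decision_support(sales_figures, analyst_adjustment=0.0):
    """Toy DSS pipeline: inputs -> transformed output -> decision."""
    # Inputs: raw comparative sales figures from one period to the next
    # User knowledge and expertise: a manual adjustment supplied by the analyst
    growth = (sales_figures[-1] - sales_figures[0]) / sales_figures[0]
    adjusted = growth + analyst_adjustment      # Output: transformed data
    # Decision: generated from the transformed output (illustrative threshold)
    if adjusted < 0:
        return "negative trend: investigate"
    return "stable or growing: maintain course"

print(decision_support([120.0, 110.0, 95.0]))   # falling sales flag a negative trend
```

A real DSS would of course combine many such indicators with business models and stored data, but the flow from inputs through transformation to a generated decision is the same.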
Benefits of Decision Support System

Some of the benefits of an effective DSS include:
1. It helps to create competitive advantage over an organization's competitors.
2. It facilitates interpersonal relationships between the employees of an organization.
3. It increases control in an organization.
4. It speeds up the process of problem-solving in an organization.
5. It recognizes and promotes the importance of training and development within an organization.
6. It encourages innovative thinking as well as the discovery of new areas by the decision maker, hence improving the motivation of the employee.

CONCLUSION

It is worth noting that the implementation of an effective information system is a continuous process that starts from the period the original suggestion was made and continues in the system as new users are introduced. Implementation plays a major role in the management of information technology, and as such, steps should be taken to ensure that it is done properly.

REFERENCES

Avgerou, C. (2001). The significance of context in information systems and organizational change. Information Systems Journal, Vol. 11, pp. 43-63.
Bhargava, H. K., Sridhar, S. & Herrick, C. (1999). Beyond spreadsheets: Tools for building decision support systems. IEEE Computer, 32(3), 31-39.
Charette, R. N. (September 2005). Why software fails. IEEE Spectrum.
Hanna, M. M., Ahuja, R. K. & Winston, W. L. (2003). Developing spreadsheet-based decision support systems using VBA for Excel. Gainesville, USA: Innovation Center.
Ribeiro, L. & David, G. (2001). Impact of the information system on the pedagogical process.
Stair, R. M. & Reynolds, G. W. (1999). Principles of information systems (4th ed.). USA: Course Technology - ITP.
Tatnall, A., Davey, B., Burgess, S., Davison, A. & Wenn, A. (2002). Management information systems - concepts, issues, tools and applications. Melbourne: Data Publishing.
Wang, Y. and Strong, D. M. (1996).
Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12, pp. 5-34.
Zhu, Z. and Meredith, P. H. (1995). Defining critical elements in JIT implementation: a survey. Industrial Management and Data Systems, 95(8), pp. 21-29.

Monday, July 29, 2019

Power and Places Type

We are going to talk about power and the places where it is exercised. Power necessarily implies an opposition between a ruler and the ruled, and there are many forms and places of power. Among the places of power are the White House, Buckingham Palace, the Pentagon... These places are powerful because they have political and military influence. A government exercises its power through institutions such as the courts or prisons; in this case, the governed are citizens who obey the laws of the government. But sometimes there is abuse of power, and we see resistance to power, as when there was apartheid in South Africa. Indeed, there was racial discrimination between blacks and whites. The great figure of the resistance in South Africa was Nelson Mandela: he fought all his life against discrimination against black people, and he was imprisoned for many years.

Among the forms of power there is, for example, the power of the media. Indeed, the media play a very important role in today's society. The media consist of books, television, movies, music, the internet, radio, magazines, newspapers, etc.
* The media have positive aspects: we can stay informed about political and social events very easily, and we can have fun watching movies and listening to music (through newspapers, TV, radio...).
* But the media also have negative aspects. Indeed, today they have become a great weapon for influencing the opinions of individuals. For example, electronic media have the power to manipulate information: they can deny certain facts and expose others, and they can broadcast a topic in a loop to influence the vision of the mass of viewers (a presidential nominee can pay for more exposure on TV...). The media display a way of life for those who follow them, so they can influence the way teenagers dress through their favorite film actors, etc. For example, children are specifically targeted in advertisements.
We teach them to eat McDonald's burgers and drink Coca-Cola. We can say that in this case, people are brainwashed by the media. In

Murder of Nixzmary Brown Article Example | Topics and Well Written Essays - 750 words

Murder of Nixzmary Brown - Article Example r old girl who was abused emotionally, sexually, and physically by her step-father over an extended period of time before being murdered at her parents' home in New York. The suffering endured by the helpless little girl did not go entirely unnoticed. However, the problem was that the adults who noticed welts on her body, or other odd marks, were reluctant to approach the authorities about what they were witnessing. Years before Nixzmary's step-father finally put an end to her life by viciously hitting her on the head, there were signs that appeared to indicate that Nixzmary was not developing as a healthy, normal little girl. According to Dan (2006), it was only after Nixzmary's tragic death that a pattern began to emerge in the sequence of odd coincidences in her earlier life, showing that she was being exposed to extreme suffering. According to Dan (2006), school workers in Nixzmary's school often reported that she would be absent for extended periods of time. In addition, it was not uncommon for her neighbors to notice unsightly welts and other unexplained injuries on her body. It would seem that Nixzmary was an uncommonly clumsy child, because her mother would often state that she fell down, or banged her arm or head on a piece of furniture, thus causing these marks on her body. According to Siegel & Welsh (2009), the family's neighbors even noticed that Nixzmary was underweight and scrawny for her age. Though child welfare workers were alerted about the case, they did not report any oddities, and left the family to itself. Staff members from Nixzmary's school even tried to visit her at home when she began to be absent on a regular basis. However, they were stopped from entering the house where the little girl was being systematically tortured unless they could produce a warrant that permitted an investigation.
In January, 2005, Nixzmary's step-father, Cesar Rodriguez, came home and found a cup of yoghurt missing (Cohn & Russell, 2012). Upon

Sunday, July 28, 2019

Food Translation Essay Example | Topics and Well Written Essays - 4000 words

Food Translation - Essay Example This essay discusses the translation of recipes and menus. It begins by discussing why translation is both a science and an art, and the principal issues of subjectivity in translation and interpretation, foreignisation versus domestication, and visibility versus invisibility. The researcher focuses on the main objective of translating menus and recipes, which is to provide information about the content or ingredients of the food to be cooked and the manner of preparation, as in recipe instructions, in such a way as to be appetising, moving the reader to try it. One crucial application of translation discussed in the essay is the understanding of recipes and menus, because of the nature that food acquires in the mind of a person, be it an American sitting at a restaurant in Cairo or an Egyptian at a restaurant in Glasgow. For both, the menu informs whether the food will agree with the body and, in the case of the Egyptian, also with the soul. The unique nature of every language system poses a paradoxical situation between the use of common translation principles and the translator strategies that were used, especially in matters of menu and recipe translation. Part of the complex nature of translation work is due to the complexity of the social and cultural meanings of food, which are unique to peoples and their geographies. The researcher also concludes that a good translator must know the translation principles and strategies well in order to do a good and effective job. ... When different cultures interact, each culture develops and changes. Language development gives translation its important role: by allowing one culture to communicate with another, translation improves the way cultures understand and influence each other. That, at least, is the theory.
The practice is complex and challenging because in translating from one language to another, it is not easy to capture precisely different cultural identities and make these easier for the other to understand. This is why translation is both a science and an art. Translation is a science because it follows objective rules and methods. It is also an art because it entails the re-production and re-creation of an original work (source text or ST) in a source language (SL) into a target language (TL) in a new work (target text or TT). The translation from ST to TT requires a complex set of knowledge and skills to re-produce the content, spirit, and context of the ST as faithfully as possible to enhance understanding and produce the intended effect. This is not easy because a faithful understanding of a culture is difficult for one not native to it. Translations must reflect the thought, feeling, and style of the SL as faithfully, flexibly, and satisfactorily in the TL, which means the TT must be close to the ST in form and substance, i.e., from the literary and linguistic points of view. Following the simplest rule of communication, the translator confronted with a ST must determine the original author's message, the meaning the author puts into that message, the author's intention, and how the author communicates that message (Venuti, 1995, 1-2). Throughout the whole translation process, the translator has to remember

Saturday, July 27, 2019

Historical leaders in quality improvement Research Paper

Historical leaders in quality improvement - Research Paper Example services, affecting the manner in which risks are perceived, care is organized, and healthcare providers are supported (McLaughlin and Kaluzny, 2006). By recognizing and applying the organizational and production principles of the manufacturing sector, healthcare professionals can improve the delivery of healthcare services suited to the needs of the patient or organization. As such, fitting the curative environment to an individual's or organization's needs is important in meeting production goals. This approach can also be employed in the delivery of healthcare services to a single patient or a population through a defined disease management program (McLaughlin and Kaluzny, 2006). Thus, reflecting on the lives of healthcare quality leaders is insightful. Florence Nightingale is known as a hospital reformer and a pioneer of nursing. She strove for innovations not only in nursing care, but in hospital administration as well. In 1854, along with well-trained women, Nightingale served the British military hospital during the Crimean War. She documented her observations of the victims and casualties of war by means of statistical applications and the treatment and analysis of mortality and injury cases. She used line diagrams to compare mortality between civilian and military personnel, and presented her findings to government authorities through polar-area diagrams. From 1854 to 1856, in a British military hospital in Turkey, Nightingale led nursing efforts in which she prioritized clothing and bedding supplies for the casualties of war and emphasized the need for a more sanitary clinical environment (Knudsen and Debon, 2003). She used to visit the wards, even late at night, looking after the conditions of ill soldiers.
This exemplary perseverance, dedication, and patience earned her the title "The Lady with the Lamp." After six months, the mortality rate in the military camp fell from 60% to 2% (Knudsen and Debon, 2003). Through her efforts,

Friday, July 26, 2019

Public health internship Personal Statement Example | Topics and Well Written Essays - 500 words

Public health internship - Personal Statement Example remains under-serviced or, worse, unserviced, due to their lack of health insurance and financial ability to pay out of pocket for their medical needs. As a volunteer with the HRSA, I will be able to help the underprivileged community get the health care that they deserve. It is only fitting that I pay it back in this manner; after all, I am being educated at someone else's expense. People understand the need for competent health care but do not have any idea as to how such health programs come about. That is why I am interested in participating in the documentation and research aspects of the program. By assisting the professionals in collating information and writing up their reports, I will gain a more thorough understanding of the complex process undertaken by the department of health in order to develop life-saving programs for those in need. I am particularly looking forward to participating in the classification projects that will help organize the needs and requirements of the various individuals who come to us seeking medical help. It will be interesting to see how this work is accomplished and how it affects the lives of those who need our help. I am looking forward to becoming an HRSA intern because I know that I will be able to make a difference in the community that I am assigned to. I know that the projects I will be involved in will be among the most effective ways of getting medical care to the communities that need it the most. We live in difficult financial times, which makes receiving proper health care all the more difficult for most people. Through my internship at the HRSA, I hope to be able to help ease or alleviate the sense of helplessness that the underprivileged or under-insured feel about their status in life. That is why the work that the HRSA does is of vital importance to every citizen of this country.
At the end of my internship at the HRSA, I hope to have achieved a level of competency in various work

Thursday, July 25, 2019

Add words to all the subtitles that is in yellow Essay

Add words to all the subtitles that is in yellow - Essay Example Seniors can benefit by learning to do online banking instead of standing in long lines at the bank. Seniors can also sign up for Medicare benefits and make changes online. It is also imperative to consider the relative technical understanding of the older generation with regard to technology use, as compared to young people. This will serve as a critical parameter in assessing their level of commitment towards the use of technology. The overall attitude and desire to use technology in their banking activities exhibit a uniform pattern, because most elderly persons are time-conscious and physically vulnerable to fatigue from the traditional banking method. Adoption of online registration, and the subsequent use of such applications, is likely to encourage the older generation to embrace technology beyond the limits of Medicare services and banking, in many other activities such as insurance services, online notifications from the various organizations of which they are members, and air travel bookings. The chosen sample size is representative of the overall technology perception trend in the total population. The current research concerns the growing aging population in this country and the increasing need for access to technology, a gap that leadership believes is growing (Gilly, 2012). Resistance to computer use by specific age-related segments of the consumer population further fueled the computer literacy debate (Gilly, 2012). Defining the attitudes and perspectives of an aging population with regard to their understanding of accessing technology can address foundational problems. The study integrates an ethnographic qualitative view of change leadership perspectives to foster improvements in technology for aging populations (McMurtrey, 2011). One major challenge is that technological advancement surpasses the rate at which the aged population becomes familiar with it.
This means a significant technology acquisition time lag which is

Wednesday, July 24, 2019

BUSINESS Report Essay Example | Topics and Well Written Essays - 1000 words

BUSINESS Report - Essay Example Also, we can observe that the management has redesigned the jobs of the workers. The hierarchical structure has been flattened and the teams hold more responsibility; eventually, this will create new challenges for the members of the team. The jobs of the workers are enriched. The jobs of the team members are halved so that they can concentrate on the development of the team. Every fortnight the team members talk for 45 minutes to solve problems and to gather new ideas. The responsibility of the workers has been increased and the role played by the workers has changed: from mere assembly line work, the job now involves various tasks such as planning, organizing, leading and directing. b) The management of BMW has taken the approach of motivating employees described in Theory Z. Theory Z sets out the major postulates of Japanese management practices and how these practices can be adapted to the environment of other countries. The major features of this theory are building trust, a strong bond between the organization and its employees, employee involvement, and the absence of formal structure. According to this approach, trust is the primary factor in motivation: trust between the members of the organization at various levels has to be built through integrity and transparency. At BMW, the work teams have been very effective in building relationships between employees across the organization. Another major aspect of this approach is employee involvement. Decisions affecting production practices are made by the team members, which increases their motivation; this also increases the commitment of the employees and gives due recognition to their role. Under this approach, formal structures in an organization are no longer adopted: at BMW, the work teams resolve issues irrespective of the formal hierarchical structures. The major advantages of

Tuesday, July 23, 2019

Research Project Essay Example | Topics and Well Written Essays - 1250 words - 4

Research Project - Essay Example ucted studies on the relationship between language anxiety and performance have indicated the existence of a negative relationship between the language barrier and the overall performance of a learner. To an extent, the effects of language anxiety severely affect the performance of a learner. The extent of these effects is obvious during language tests, when learners are put under the pressure of time constraints and of success. This study focused on the issue of language anxiety and its effects on Saudi learners' test performance. A questionnaire was used to identify learners' different levels of language anxiety. Later, the learners were subjected to a standardised test to determine their anxiety levels. The results from the study indicated that anxiety, as measured by its correlation with test scores, had adverse effects on the students' performance in the tests. From the findings, suggestions were made on the need for more attention to be paid to language anxiety. Methods of reducing language anxiety among students during tests, in order to improve their performance, are also suggested. Language anxiety, comprising the various types of fear, worry, or nervousness related to learning or using a foreign language in communication, has been the subject of research for a long time. The feeling of discomfort associated with using a foreign language, both in learning and in communication, in comparison to the ease of using one's mother tongue, is justified. Experts in the fields of anxiety and psychology hold that language anxiety has negative effects on the performance of a student, which at times can lead to adverse results. The ease of understanding questions in a test, the time taken in understanding and answering a particular question, and the comfort while answering questions are the various factors that affect the level of performance of a student.
Time is essential in tests, as success is determined by the ability to answer all or most of the questions.

Predictive Methods Essay Example | Topics and Well Written Essays - 250 words

Predictive Methods - Essay Example According to Sandford and Hsu, the Delphi Technique can perform functions such as the exposition of underlying information, thus leading to various judgments. It can also educate respondents on the vast interrelated aspects of the topic [3]. In such a case, a group of experts would need some experience concerning Ukraine in order to determine the viable intentions of the people concerning any impending attack. Such determinations rely on the use of experts with intelligence knowledge, such as the Red Team. Such cases require fusion by the relevant teams to assist in the identification of elements within the area of responsibility [4]. Therefore, the Red Team becomes an integral part of the prediction of the study question. The Red Team has shown major advances, both in techniques and in methods to handle small warring groups [5]. Red teaming is fundamental to ensuring information for intelligence collection and analysis, and thus the Secretary of Defense should ensure the effective establishment of the team in critical areas such as Ukraine [6]. The Red Teams have the expertise suitable for analyzing situations such as Ukraine from various perspectives [7]. In such a case, they will be in a position to have sufficient information to tell whether Russia is at risk of invasion. Red Teaming in the past has proved viable in the identification of potential clashes between different groups [7]. Reliable sources say that it is possible that Ukraine is preparing for an attack on the Russian islands. However, the sources do not provide clear-cut support for such claims; thus the Delphi method cannot ascertain the outcome of such claims.

Monday, July 22, 2019

Outsourcing in America Essay Example for Free

Outsourcing in America Essay 1. Introduction In business, in order to provide services or sell products at competitive rates, corporations must cut unnecessary costs or focus on core competences in order to reduce the number of human resources and associated costs. Given the need to reduce costs, coupled with the fierce competition in business, enterprises are currently striving to find the best solutions to increase revenue while keeping costs as low as possible. While mature technology can help enterprises reach economies of scale, outsourcing of employees (human resources) can be the savior for companies seeking to keep the fixed costs incurred from employment and research to a minimum. Fortunately, in the Internet era, where documentation can be sent over the internet and jobs can be conducted via e-mail or instant messaging, the outsourcing employment model has proved to deliver significant savings. Concerning this issue, this paper discusses the benefits and impacts of outsourcing in the U.S. 2. Outsourcing The Government Accountability Office (GAO) says that "outsourcing" of services refers to an organization's purchase from other countries of services that it previously produced or purchased domestically, such as software programming or telephone call centers (US Embassy, 2004). Just like other business schemes, outsourcing has advantages and disadvantages, as follows: 2.1. Advantages of Outsourcing to the U.S. Economy In the U.S., the cost of labor has increased significantly. The situation has driven American enterprises, especially those in information technology segments, to outsource software development to developing countries like India. The reason is obvious: labor costs in India are much cheaper. This makes sense since, in today's economy, companies need to maintain a cost structure that is globally competitive; given that requirement, we can easily guess how businesses will react.
Ultimately, free market competition is the ruler of the day, and, while governments may introduce barriers that influence individual situations, there will be no stopping the offshore outsourcing trend. Moreover, McKinsey & Co predicts that the Internet-enabled services (ITES) market is likely to touch $142 billion in 2009. There would be a net saving of $390 billion from the current cost of $532 billion for these services (Kurian, 2003). The U.S. can realize this net saving through offshoring to other countries like India. 3. Impacts of Outsourcing While such incredible savings might be the concern of American enterprises, Nasscom quoted Michel Janssen, founder and President of Everest Group, that there is a possibility that outsourcing is closely related to the loss of jobs in the US. Some private researchers predict that outsourcing may eliminate 100,000 to 500,000 IT (information technology) jobs within the next few years, while others note that outsourcing can also generate benefits, such as lower prices, productivity improvements, and overall economic growth. Concerning this situation, Jackson (2005) sees that outsourcing may lead to increasing imports to the U.S. This makes sense: while foreign investment is displacing jobs and domestic production, there is a possibility that foreign affiliates increase imports to the U.S. parent company. In addition, the media and the public report that outsourcing leads to worse services or products. With the increasing trend to use outsourcing in some of the core functions of a company (like customer service and hospital staff), reports about inferior quality caused by outsourcing agreements are growing in number (Dookril, 2004).

Sunday, July 21, 2019

Christianity And Environmental Issues Religion Essay

Christianity And Environmental Issues Religion Essay The Bible calls Christians to be the stewards of the earth. We are called to maintain and protect the earth. God loved His creation. He protected and maintained His creation. We are created in the image of God, and as Christians we should love the earth. Genesis 1:26 (NIV) says, "Then God said, 'Let us make man in our image, in our likeness, and let them rule over the fish of the sea and the birds of the air, over the livestock, over all the earth, and over all the creatures that move along the ground.'" God did not give the earth to man so man could consume all the resources over all the earth. We have to make a suitable environment for all of God's creatures. Christians are called to, and can, change the earth through awareness, conservation, pollution control, and environmental restoration. Christians have to take the lead in creating awareness of the earth's environmental problems. Rev. Tom Wenig, pastor of Lutheran Church of Our Redeemer, says that "[environmental awareness] is really trying to develop a mind-set. It isn't trying to take on a big political agenda." First, the church has to realize that it is called by God to sustain and protect the environment called earth. For decades, Christians have ignored this issue. One article published in Science stated that Christians have ignored environmental issues and have helped create environmental problems. But the church is responsible for caring for God's creation. The church should freely share the importance of conserving resources, recycling, donating, and other ideas to preserve our environment. The church should also reach out to non-Christians and make them aware of the importance of taking care of our earth. Christians and churches are just beginning to work together to help make positive and meaningful changes to our world. The hope is that this trend continues and that national and global support continues to grow toward this awareness.
Christians can have a positive impact on plants and animals through conservation. Psalm 145:9-10 (NIV) says, "The Lord is good to all; he has compassion on all he has made. All you have made will praise you, O Lord; your saints will extol you." We can conserve energy by turning things off when not using them and turning down the thermostat at home. We can conserve water by taking shorter showers, installing water-saving showerheads, and not watering our lawns. We can use energy-efficient light bulbs and insulate our water heaters, which saves electricity. We can ensure our houses are suited to the weather, making sure they are energy efficient and that no energy is seeping out. If we all work together to conserve and stay aware, we can help to preserve the environment. Controlling pollution any way we can, as individuals and as Christians, will benefit and preserve the environment. Matt Farina says that "God loved us enough to make this world for us. The least we can do is care for it." Christians can make a great impact through recycling. A survey from the Department of Ecology in the state of Washington showed that 12,842 aluminum cans were recycled in 2008. Recycling helped save 2,658,142 British Thermal Units (BTU). Recycling also helped to avoid 47,882 greenhouse gas emissions (GHG). Seattle recycles 44 percent of its trash and is aiming for 75 percent by 2025. We can recycle bottles, metal, cans, and trash to make materials reusable. Riding bikes or walking instead of driving will decrease the use of gas. Carpooling instead of taking individual vehicles can save gas. Sending emails on a computer instead of writing letters will save trees that would be used for paper. Handling chemicals properly will decrease the amount of polluted air that people are breathing. We all can help the environment if we learn to control pollution. As Christians, we need to step away from our cultural barriers and support environmental restoration.
Adam Clarke compares environmental restoration to the restoration of man. He states, "This perfection is the restoration of man to the state of holiness from which he fell, by creating him anew in Christ Jesus, and restoring to him that image and likeness of God which he has lost." As Christians, we can support this cause by planting gardens and not relying on supermarkets to provide fruits and vegetables. Once our gardens are harvested, we should share our fruits and vegetables with others. Some people criticize Christians and non-Christians alike as environmental extremists. Some individuals dismiss the claims regarding the environment and its deteriorating state as false and exaggerated. Other individuals believe change is constant and inevitable and that man is powerless to do anything about it. That cannot be any farther from the truth. More and more people are dying or catching diseases due to environmental hazards. These issues are not limited to America. The problem is far worse in third world countries. Churches that take part in mission trips visit countries that have massive famine in the land. Nicaragua is one of the poorest countries in the Americas, and 47% of its population is below the poverty level. In countries such as Nicaragua, the famine is so massive that adults and children are living in dumps. There is no clean water to drink and there is almost no healthy food to eat. Third world countries are in serious need of environmental restoration. Restoration in these countries can happen through efforts like mission trips. Feeding kids, building houses, and providing clean water is a great way for Christians and non-Christians to restore the environment. People who criticize Christians and non-Christians for being environmental extremists have not seen the impact. They have not witnessed the chemical diseases that come with pollution in America or the famine in third world countries.
There are many opportunities in these cultures for Christians and non-Christians alike to restore the environment. The environment and its restoration are a major issue. People are contracting diseases from toxic chemicals in the pollution being emitted. People in third world countries are dying due to the famine in the land.

Saturday, July 20, 2019

Corrections for attenuation and corrections for range restriction

Corrections for attenuation and corrections for range restriction One of the most pervasive methodological problems in the educational and psychological field entails determination of the techniques which are to be used in assessing the nature and strength of the relationship between various measures. Of course, the correlation coefficient has provided the field with a viable statistical tool for solving this problem. Unfortunately, in some instances the appropriateness of correlational techniques may be limited by the operation of certain statistical biases in actual data bases. Thorndike (1949) has noted that two of these biases, termed range restriction and attenuation effects, can exert a powerful diminishing influence on the magnitude of observed correlation coefficients. Range restriction occurs when a researcher wants to estimate the correlation between two variables (x and y) in a population, but subjects are selected on x, and data for y are only available for a selected sample (Raju & Brand, 2003). This occurs, for example, when scores from admission tests are used to predict academic success in higher education or are compared with grades in the program the applicants were admitted to (Gulliksen, 1950; Thorndike, 1949). Because selection is made on the basis of scores from these kinds of instruments, the range of scores is restricted in the sample. Although the correlation between test scores and academic success can be obtained for the restricted sample, the correlation for the population of applicants remains unknown. Due to the range restriction in test scores, the correlation obtained is expected to be an underestimate of the correlation in the population (Hunter & Schmidt, 1990; Henriksson & Wolming, 1998).
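To make the underestimation concrete, the following small simulation sketch (illustrative only; the sample size, seed, and selection cutoff are arbitrary assumptions, not from the studies cited) generates bivariate normal data with a known population correlation and then computes the correlation only for the "admitted" cases selected on x:

```python
import math
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

random.seed(42)
rho = 0.5  # known population correlation
pairs = []
for _ in range(5000):
    x = random.gauss(0, 1)
    y = rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    pairs.append((x, y))

full_r = pearson_r([p[0] for p in pairs], [p[1] for p in pairs])
# Explicit selection on x: the criterion y is observed only for admitted cases
admitted = [p for p in pairs if p[0] > 0.5]
restricted_r = pearson_r([p[0] for p in admitted], [p[1] for p in admitted])
# restricted_r comes out noticeably smaller than full_r, illustrating the bias
```

With these settings the restricted correlation falls well below the full-sample value, even though every admitted pair is drawn from the same population.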
Attenuation effects refer to the fact that an observed correlation coefficient will tend to underestimate the true magnitude of the relationship between two variables to the extent that these measures are not an accurate reflection of true variation, i.e., to the extent that they are unreliable. In some applied studies, the operation of these biases may be acceptable. Yet when an investigation centers on determining the true strength of the relationship between two sets of measures, the operation of these biases in the experimental data base constitutes a serious, often unavoidable, confound (Crocker & Algina, 1986; Worthen, White, Fan, & Sudweeks, 1999). Psychometrics has long been aware of the implications of range restriction and attenuation effects with respect to the inferences drawn by researchers concerning the magnitude of relationships. Consequently, a variety of formulas have been derived which permit the researcher to correct data-based estimates of the magnitude of a correlation coefficient for the operation of these influences (Guilford, 1954; Stanley, 1971). The aim of this review is to discuss the importance of correcting for range restriction and correcting for attenuation in predictive validity studies and to review two methods of correction for range restriction (Thorndike's Case II and ML estimates obtained from the EM algorithm) and two methods of correction for attenuation (the traditional approach and the latent variable modeling approach). Results from research evaluating the use of these methods will also be discussed. Importance of corrections for range restriction and attenuation effects As early as the beginning of the last century, Pearson (1903), in developing the Pearson product-moment correlation coefficient, noticed problems due to range restriction and attenuation and discussed possible solutions.
Since then, a great number of studies have examined the biasing effect of these statistical artifacts (e.g., Alexander, 1988; Dunbar & Linn, 1991; Lawley, 1943; Linn, Harnisch, & Dunbar, 1981; Schmidt, Hunter, & Urry, 1976; Thorndike, 1949; Sackett & Yang, 2000). It is evident from the literature that both range restriction and attenuation can create serious inaccuracies in empirical research, especially in the fields of employment and educational selection. The need for correcting validity coefficients for statistical artifacts is becoming more recognized. Validity generalization research has demonstrated that artifacts like range restriction and attenuation account for large percentages of the variance in distributions of validity coefficients. Although the Society for Industrial and Organizational Psychology's (SIOP) Principles (1987) recommend correcting validity coefficients for both range restriction and criterion unreliability, researchers rarely do so. Ree et al. (1994) discussed the application of range restriction corrections in validation research. They reviewed validity articles published in Educational and Psychological Measurement, Journal of Applied Psychology, and Personnel Psychology between 1988 and 1992. Ree et al. (1994) concluded that only 4% of the articles dealing with validation topics applied range restriction corrections. Researchers may be reluctant to apply corrections for range restriction and attenuation for several reasons. Seymour (1988) referred to statistical corrections as "hydraulic," implying that researchers can achieve a desired result by pumping up the corrections. Another reason for reluctance in applying corrections may be that the APA Standards (1974) stated that correlations should not be doubly corrected for attenuation and range restriction. The more current Standards (1985), however, endorse such corrections.
A third reason for not using the corrections is that knowledge of unrestricted standard deviations is often lacking (Ree et al., 1994). Finally, researchers may be concerned that in applying corrections to correlation coefficients, they may inadvertently overcorrect. Linn et al. (1981) stated that "procedures for correcting correlations for range restriction are desperately needed in highly selective situations (i.e., where selection ratios are low)" (p. 661). They continued, "The results also clearly support the conclusion that corrections for range restriction that treat the predictor as the sole explicit selection variable are too small. Because of this undercorrection, the resulting estimates still provide a conservative indication of the predictive value of the predictor" (p. 661). Linn et al. stated that ignoring range restriction and/or attenuation corrections because they may be too large is overly cautious. They suggested the routine reporting of both observed and corrected correlations. Both observed and corrected correlations should be reported because there is no significance test for corrected correlations (Ree et al., 1994). Based on the logic and suggestions from the literature, there appear to be a number of reasons to correct for restriction of range and attenuation in predictive validity studies. These corrections can be used to adjust the observed correlations for biases, and thus yield more accurate results. Correction Methods for Range Restriction There are several methods for correcting correlations for range restriction. This review examines two approaches to correction for range restriction: Thorndike's Case II and ML estimates obtained from the EM algorithm. These methods will be described first, and then results from research evaluating their use will be discussed. Thorndike's case II Thorndike's (1949) Case II is the most commonly used range restriction correction formula in an explicit selection scenario.
Explicit selection is a process, based on the predictor x, that restricts the availability of the criterion y. The criterion is only available (measured) for the selected individuals. For example, consider the seemingly straightforward case where there is direct selection on x (e.g., no one with a test score below a specified cutoff on x is selected into the organization) (Mendoza, 1993). Thorndike's Case II equation can be written as follows: Rxy = rxy / √[ux² + rxy²(1 − ux²)], where Rxy = the validity corrected for range restriction; rxy = the observed validity in the restricted group; and ux = sx/Sx, where sx and Sx are the restricted and unrestricted SDs of x, respectively. The formula can be applied when both the restricted and unrestricted SDs of x are at hand. The use of this formula requires that the unrestricted, or population, variance of x be known. Although often this is known, as in the case of a predictive study where all applicants are tested and test data on all applicants are retained, it is not uncommon to encounter the situation in which test data on applicants who were not selected are discarded and thus are not available to the researcher who later wishes to correct the sample validity coefficient for range restriction (Sackett and Yang, 2000). Issues with Thorndike's Case II method Thorndike's Case II is by far the most widely used correction method. It is appropriate under the condition of direct range restriction (a situation where applicants are selected directly on test scores). Researchers have used it and demonstrated its appropriateness. For example, Chernyshenko and Ones (1999) and Wiberg and Sundström (2009) showed that this formula produced close estimates of the correlation in a population. Although the use of Thorndike's Case II formula is straightforward, it imposes some requirements. First, it requires that the unrestricted, or population, variance of x be known. Second, the formula requires that there is no additional range restriction on additional variables.
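As a sketch, the Case II correction can be computed directly from the three quantities it requires (the function and argument names here are illustrative assumptions, not from the sources cited):

```python
import math

def thorndike_case_ii(r_restricted, sd_restricted, sd_unrestricted):
    """Correct an observed validity coefficient for direct range
    restriction on the predictor x (Thorndike's Case II).

    r_restricted   : observed correlation rxy in the selected sample
    sd_restricted  : SD of x in the selected sample (sx)
    sd_unrestricted: SD of x in the full applicant pool (Sx)
    """
    u = sd_restricted / sd_unrestricted          # ux = sx / Sx
    r = r_restricted
    return r / math.sqrt(u ** 2 + r ** 2 * (1 - u ** 2))

# A restricted correlation of .30, where selection halved the SD of x,
# corrects upward to roughly .53; with no restriction (u = 1) it is unchanged
corrected = thorndike_case_ii(0.30, 5.0, 10.0)
```

Note that as ux shrinks toward zero (severe restriction) the corrected value grows, which is why the correction must be applied only under the assumptions listed in the text.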
If the organization also imposes an additional cutoff, such as a minimum education requirement, applying the Case II formula produces a biased result. In this example, if education level (z) and test score (x) are known for all applicants, a method for solving the problem exists (Aitken, 1934). Third, the correction formula requires two assumptions: that the x-y relationship is linear throughout the range of scores (i.e., the assumption of linearity) and that the error term is the same in the restricted sample and in the population (i.e., the assumption of homoscedasticity). Note that no normality assumption is required for the formula (Lawley, 1943). Another issue found in the literature with this method arises when it is applied under indirect range restriction (a case where the applicants are selected on another variable that is correlated with the test scores), where it has been shown to underestimate validity coefficients (Hunter & Schmidt, 2004, Ch. 5; Hunter et al., 2006; Linn et al., 1981; Schmidt, Hunter, Pearlman, & Hirsh, 1985, p. 751). Maximum Likelihood estimates obtained from the Expectation Maximization algorithm Using this approach, selection is viewed as a missing data mechanism, i.e., the criterion values of unselected cases are viewed as missing, and the missing values are estimated before estimating the correlation. By viewing it as a special case of missing data, we can borrow from a rich body of statistical methods; for an overview see, e.g., Little & Rubin (2002), Little (1992), or Schafer & Graham (2002). There are three general missing data situations: MCAR, MAR, and MNAR. Assume X is a variable that is known for all examinees and Y is the variable of interest with missing values for some examinees. MCAR means that the data is Missing Completely At Random, i.e., the missing data distribution does not depend on the observed or missing values. In other words, the probability of missingness in data Y is unrelated to X and Y.
MAR means that the data is Missing At Random, i.e., the conditional distribution of data being missing given the observed and missing values depends only on the observed values and not on the missing values. In other words, the probability of missingness in data Y is related to X, but not to Y. MNAR means that data is Missing Not At Random. In other words, the probability of missingness on Y is related to the unobserved values of Y (Little & Rubin, 2002; Schafer & Graham, 2002). If the data is either MCAR or MAR, we can use imputation methods to replace missing data with estimates. In predictive studies, where the selection mechanism is based solely on X, the data is considered to be MAR (Mendoza, 1993). Using this approach, we can use information on some of the other variables to impute new values. Herzog & Rubin (1983) stated that by using imputation one can apply existing analysis tools to any dataset with missing observations and use the same structure and output. There are several different techniques that use imputation to replace missing values. The most commonly applied techniques are mean imputation, hot-deck imputation, cold-deck imputation, regression imputation, and multiple imputation (Madow, Olkin, & Rubin, 1983; Särndal, Swensson, & Wretman, 1992). In general, imputation may cause distortions in the distribution of a study variable or in the relationship between two or more variables. This disadvantage can be diminished when, e.g., multiple regression imputation is used (Särndal et al., 1992). For example, Gustafsson & Reuterberg (2000) used regression to impute missing values in order to get a more realistic view of the relationship between grades in upper secondary schools in Sweden and the Swedish Scholastic Achievement Test. Note that simple regression imputation is questionable to use: because all imputed values fall directly on the regression line, the imputed data lack the variability that would be present had both X and Y been collected.
In other words, the correlation would be 1.0 if computed only with imputed values (Little & Rubin, 2002). Therefore the literature suggests imputing Maximum Likelihood (ML) estimates for the missing values that are obtained using the Expectation Maximization (EM) algorithm (Dempster, Laird, & Rubin, 1977). ML estimates obtained from the EM algorithm are imputed for the criterion variable, for example for examinees who failed the selection test (Dempster et al., 1977; Little, 1992). The complete and incomplete cases are used together as the EM algorithm re-estimates means, variances, and covariances until the process converges. The basis of EM imputation of missing values is an iterative regression imputation. The final estimated moments are the EM estimates, including an estimate of the correlation. For an extensive description see SPSS (2002). The idea is that the missing Y values are imputed using the equation Ŷ = b0 + b1X, where b0 and b1 are the estimates obtained from the final iteration of the EM algorithm. Schafer and Graham (2002) suggested that using EM imputation is valid when examining missing data. Issues with the ML estimates obtained from the EM algorithm method This approach is seldom used with range restriction problems, although it has been mentioned as a possibility (Mendoza, 1993). In a more recent study, Mendoza, Bard, Mumford, and Ang (2004) concluded that the ML estimates obtained from the EM algorithm procedure produced far more accurate results. Wiberg and Sundström (2009) evaluated this approach in an empirical study, and their results indicated that ML estimates obtained from the EM algorithm seem to be a very effective method of estimating the population correlation. Since there is not much work in the literature examining the appropriateness and effectiveness of this approach, many questions need to be answered when using ML estimates obtained from the EM algorithm to correct for range restriction.
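A minimal sketch of such an EM procedure for a bivariate normal model, treating the criterion as missing at random for unselected cases, might look as follows. This is an illustrative implementation under stated assumptions, not the SPSS routine the review cites; the function name, iteration count, and numerical floor are all assumptions:

```python
import math
import random

def em_correlation(x, y, n_iter=100):
    """EM estimate of the x-y correlation when y is None (missing)
    for cases screened out on x (missing at random given selection on x)."""
    n = len(x)
    obs = [i for i in range(n) if y[i] is not None]
    # x is fully observed, so its moments are fixed throughout
    mx = sum(x) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / n
    # initialize the y moments from the complete cases
    my = sum(y[i] for i in obs) / len(obs)
    syy = sum((y[i] - my) ** 2 for i in obs) / len(obs)
    sxy = sum((x[i] - mx) * (y[i] - my) for i in obs) / len(obs)
    for _ in range(n_iter):
        b = sxy / sxx                              # regression slope of y on x
        resid = max(syy - sxy ** 2 / sxx, 1e-12)   # residual variance of y given x
        ey, ey2 = [], []
        for i in range(n):                         # E-step: expected y and y^2
            if y[i] is not None:
                ey.append(y[i]); ey2.append(y[i] ** 2)
            else:
                m = my + b * (x[i] - mx)
                ey.append(m); ey2.append(m ** 2 + resid)
        my = sum(ey) / n                           # M-step: update the moments
        syy = sum(ey2) / n - my ** 2
        sxy = sum(x[i] * ey[i] for i in range(n)) / n - mx * my
    return sxy / math.sqrt(sxx * syy)
```

Because the expected squared values carry the residual variance term (m ** 2 + resid), the estimate avoids the inflated correlation that plain regression imputation produces.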
More research is needed to evaluate the use of this approach; areas of special interest include simulations of different population correlations and different selection proportions when using the missing data approach. Regarding the EM imputation approach, one important research question is how many cases can be imputed at the same time while still obtaining a good estimate of the population correlation. Correction Methods for Attenuation In educational and psychological research, it is well known that measurement unreliability, that is, measurement error, attenuates the statistical relationship between two composites (e.g., Crocker & Algina, 1986; Worthen, White, Fan, & Sudweeks, 1999). In this review, two approaches for correcting attenuation effects caused by measurement error, the traditional approach and the latent variable modeling approach, will be described, and results from research evaluating their use will be discussed. Traditional approach In classical test theory, the issue of attenuation of correlation between two composites caused by measurement unreliability is usually discussed within the context of score reliability and validity. More specifically, if there are two measured variables x and y, their correlation is estimated by the Pearson correlation coefficient rxy from a sample. Because the measured variables x and y contain random measurement error, this correlation coefficient rxy is typically lower than the correlation coefficient between the true scores of the variables Tx and Ty (rTx,Ty) (Fan, 2003). When Spearman first proposed the correction for attenuation, he advocated correcting both the predictor and the criterion variables for unreliability. His equation, rTx,Ty = rxy / √(rxx · ryy), where rxx and ryy are the reliabilities of x and y, is known as the double correction. The double correction performed on the obtained validity coefficient reveals what the relationship would be between two variables if both were measured with perfect reliability.
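The double correction is a one-line computation. In this hedged sketch (the function and argument names are illustrative assumptions), leaving a reliability at 1.0 skips the correction for that variable, which reproduces the single-correction variants as well:

```python
import math

def disattenuate(r_xy, rel_x=1.0, rel_y=1.0):
    """Spearman's correction for attenuation.

    r_xy  : observed correlation between measures x and y
    rel_x : reliability of x (leave at 1.0 to skip correcting the predictor)
    rel_y : reliability of y (leave at 1.0 to skip correcting the criterion)
    """
    return r_xy / math.sqrt(rel_x * rel_y)

# Double correction: with rxy = .65, rxx = .81, ryy = .49 the result
# exceeds 1.0 (the classic anomaly), since .65 / sqrt(.81 * .49) = 1.03
r_true = disattenuate(0.65, rel_x=0.81, rel_y=0.49)
```

Criterion-only correction, for instance, is `disattenuate(r_xy, rel_y=ryy)`, dividing the observed coefficient by the square root of the criterion reliability alone.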
Because measurement error truncates, or reduces, the size of the obtained validity coefficient, the effect of the correction is to elevate the magnitude of the corrected validity coefficient above the magnitude of the obtained validity coefficient. The lower the reliability of the predictor and/or criterion variables, the greater will be the elevation of the correction. If both the test and the criterion exhibit very high reliability, the denominator of the equation will be close to unity, thus rTx,Ty ≈ rxy. The double correction formula was followed by the single correction formula as researchers began to shift the emphasis from test construction to issues of using tests to predict criteria. As the name implies, the formula involves correcting for unreliability in only one of the two variables. The formula would be either rTx,Ty = rxy / √ryy (correcting for unreliability in the criterion variable only) or rTx,Ty = rxy / √rxx (correcting for unreliability in the predictor variable only). The rationale for the single correction of the criterion unreliability was best stated by Guilford (1954): "In predicting criterion measures from test scores, one should not make a complete [double] correction for attenuation. Corrections should be made in the criterion only. On the one hand it is not a fallible criterion that we should aim to predict, including all its errors; it is a true criterion or the true component of the obtained criterion. On the other hand, we should not correct for errors in the test, because it is the fallible scores from which we must make predictions. We never know the true scores from which to predict." (p. 401) Although most researchers have adopted Guilford's position on correcting only for criterion unreliability, there have been cases where correcting only for unreliability in the predictor was used.
However, these occasions appear to be special cases of the double correction, where either the reliability of the criterion was unknown or the criterion was assumed to be measured with perfect reliability. The former situation is not unusual: we often know more about the reliability of tests than the reliability of criteria. The latter situation is more unusual in that variables are rarely assessed with perfect reliability. Issues with the traditional approach The correction for attenuation due to measurement error is one of the earliest applications of true-score theory (Spearman, 1904) and has been the subject of numerous debates, spurring criticisms from its very inception (e.g., Pearson, 1904). Despite this, no real consensus on correction for attenuation has emerged in the literature, and many ambiguities regarding its application remain. One of the early criticisms concerns corrected validity coefficients greater than one. Although it is theoretically impossible to have a validity coefficient in excess of 1.00, it is empirically possible to compute such a coefficient using Spearman's correction formula. For example, if rxy = .65, rxx = .81, and ryy = .49, then rTx,Ty = .65 / √(.81 × .49) = 1.03. The value of 1.03 is theoretically impossible because valid variance would exceed obtained variance. Psychometricians have offered various explanations for this phenomenon. Before the year ended, Karl Pearson (1904, in his appendix) had declared that any formula that produced correlation coefficients greater than one must have been improperly derived; however, no errors were subsequently found in Spearman's formula. This led to debate over both how correction for attenuation could result in a correlation greater than one and whether a procedure that often resulted in a correlation greater than one was valid. Many explanations for correction for attenuation's supposed flaw have been suggested. Error in estimating reliability.
Many statistics used to estimate reliability are known to regularly underestimate reliability (i.e., to overestimate the amount of error; Johnson, 1944; Osburn, 2000). Whereas this bias is tolerated as being in the preferred direction for some applications (as when a researcher wants to guarantee a minimum reliability), the result of correction for attenuation is inflated if the denominator entered into the equation is less than the accurate value (Winne & Belfry, 1982). Other researchers have shown that some reliability estimates can overestimate reliability when transient errors are present; however, it has been argued that this effect is probably small in practice (Schmidt & Hunter, 1996, 1999).

Normal effects of the sampling process.

Others, including Spearman (1910), have attempted to explain corrected correlations greater than one as the normal result of sampling error. Worded more explicitly, this asserts that a corrected correlation of 1.03 should fall within the sampling distribution of corrected correlations produced by a population with a true-score correlation less than or equal to one. Despite this, it was some time before researchers began to examine the sampling distributions of corrected correlations, although some early studies that examined the accuracy of correction for attenuation are of note  [3]  .

Misunderstanding of random error.

Thorndike (1907) applied multiple simulated error sets to a single set of true-score values and concluded that the equation for correction for attenuation worked reasonably well. Johnson (1944) extended this study and demonstrated that random errors will occasionally raise the observed correlation above the true-score correlation. In those cases, the equation for correction for attenuation corrects in the wrong direction.
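Johnson's demonstration is easy to reproduce in simulation: fix one set of true scores, add many independent error sets, and count how often the observed correlation comes out above the true-score correlation. The sketch below uses an arbitrary sample size, true-score relationship, error magnitude, and seed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
t_x = rng.standard_normal(n)
t_y = 0.6 * t_x + 0.8 * rng.standard_normal(n)   # one fixed set of true scores
r_true = np.corrcoef(t_x, t_y)[0, 1]             # the true-score correlation

wrong_direction = 0
for _ in range(1000):                            # 1000 simulated error sets
    x = t_x + 0.5 * rng.standard_normal(n)       # fresh random error each time
    y = t_y + 0.5 * rng.standard_normal(n)
    if np.corrcoef(x, y)[0, 1] > r_true:         # error RAISED the correlation
        wrong_direction += 1

# Attenuation is only an average tendency: in some error sets the observed
# r exceeds the true-score r, and the correction then moves it even
# further in the wrong direction.
print(wrong_direction > 0)
```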
Johnson's conclusion was that "corrected coefficients greater than one are caused by fluctuations in observed coefficients due to errors of measurement and not by fluctuations caused by errors of sampling, as suggested by Spearman" (Johnson, 1944, p. 536). Garside (1958) referred to the various bases of error variance in the coefficients as "function fluctuations."

Latent variable modeling approach

The latent variable approach is relevant when a multifactorial test is used in the admission of students to various schools. Most often a composite measure based on the total test score, or on subtest scores, is used in such prediction. A multiple-factor latent variable model for the observed variables comprising the test can make more efficient use of the test information. Correctly assessing predictive validity in traditional selection studies, without latent variables, is a difficult task involving adjustments to circumvent the selective nature of the sample used for validation. Latent variable modeling of the components of a test in relation to a criterion variable provides more precise predictor variables and may include factors that have a small number of measurements. For many ability and aptitude tests it is relevant to postulate a model with both a general factor influencing all components of the test and specific factors influencing narrower subsets (Fan, 2003). In confirmatory factor analysis, where each latent factor has multiple indicators, measurement errors are explicitly modeled in the process, so the relationships between latent factors can be considered free from the attenuation caused by measurement error. For example, the GMAT exam is a standardized assessment that helps business schools assess the qualifications of applicants for advanced study in business and management. The GMAT exam measures three areas: Verbal, Quantitative Reasoning, and Analytical Writing Skills. To illustrate the point, let's look at the verbal exam.
The verbal exam measures three related latent variables: Critical Reasoning, Reading Comprehension, and Grammar and Sentence Structure. Each of these latent variables has many indicators. In such a model, the factor correlations represent the true relationships among the three latent variables, unattenuated by the measurement error in their indicators. Using this approach, once the interitem correlations are obtained, the population reliability in the form of Cronbach's coefficient alpha  [4]  can be computed. Cronbach's coefficient alpha takes the form α = [k/(k − 1)][1 − (Σσi²)/σX²], where k is the number of items within a composite, Σσi² is the sum of the item variances, and σX² is the variance of the composite score. The variance of the composite is simply the sum of the item variances plus twice the sum of the item covariances: σX² = Σσi² + 2Σσij. The population intervariable correlation is obtained from the two-factor model in the Figure above based on the following (Jöreskog & Sörbom, 1989): Σ = ΛΦΛ′ + Θ, where Σ is the population covariance matrix (a correlation matrix for our standardized variables), Λ is the matrix of population pattern coefficients, Φ is the population correlation matrix for the two factors, and Θ is the covariance matrix of population residuals for the items.

Issues with latent variable modeling approach

This approach for obtaining measurement-error-free correlation coefficients is well known in the area of structural equation modeling, but it is rarely discussed within the context of measurement reliability and validity.
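The alpha and covariance-structure formulas above can be checked numerically. The sketch below builds Σ = ΛΦΛ′ + Θ for a hypothetical standardized two-factor model (three indicators per factor; the loadings and the factor correlation of .5 are invented for illustration) and computes coefficient alpha from the implied item covariance matrix:

```python
import numpy as np

def cronbach_alpha(cov):
    """alpha = (k/(k-1)) * (1 - sum of item variances / composite variance)."""
    cov = np.asarray(cov)
    k = cov.shape[0]
    sum_item_var = np.trace(cov)   # sum of item variances (diagonal)
    composite_var = cov.sum()      # item variances + 2 * item covariances
    return (k / (k - 1)) * (1 - sum_item_var / composite_var)

# Hypothetical pattern matrix Lambda: items 1-3 load on factor 1, items 4-6 on factor 2.
Lam = np.array([[0.8, 0.0], [0.7, 0.0], [0.6, 0.0],
                [0.0, 0.8], [0.0, 0.7], [0.0, 0.6]])
Phi = np.array([[1.0, 0.5],        # factor correlation matrix, Phi
                [0.5, 1.0]])
common = Lam @ Phi @ Lam.T         # common variance, Lambda Phi Lambda'
Theta = np.diag(1.0 - np.diag(common))  # residuals so each item variance is 1
Sigma = common + Theta             # Sigma = Lambda Phi Lambda' + Theta

print(bool(np.allclose(np.diag(Sigma), 1.0)))  # True: standardized items
print(round(cronbach_alpha(Sigma), 3))         # alpha of the 6-item composite
```

Because the items here are standardized, Σ doubles as the interitem correlation matrix, so the same matrix feeds both the factor model and the alpha computation.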
Fan (2003) used this approach to correct for attenuation and showed that it provided not only nearly identical and unbiased means but also nearly identical confidence intervals for the sampling distribution of the corrected correlation coefficients. It has been pointed out, however, that the latent variable modeling approach may be less applicable in research practice because of more difficult data conditions at the item level. DeShon (1998) stated that the latent variable modeling approach provides a mathematically rigorous method for correcting relationships among latent variables for measurement error in the indicators of those variables. However, the approach can only use the information it is provided to correct for attenuation in a relationship; it is not an all-powerful technique that corrects for all sources of measurement error.

Conclusion

It has long been recognized that insufficient variability in a sample will restrict the observed magnitude of a Pearson product-moment coefficient. Since R. L. Thorndike's days, researchers have been correcting correlation coefficients for attenuation and/or restriction in range. The topic has received considerable attention (Bobko, 1983; Callender & Osborn, 1980; Lee, Miller, & Graham, 1982; Schmidt & Hunter, 1977), and today correlation coefficients are corrected for attenuation and range restriction in a variety of situations. These include test validation, selection, and validity generalization (meta-analysis; Hedges & Olkin, 1985) studies, such as those conducted by Hunter, Schmidt, and Jackson (1982). For example, Pearlman, Schmidt, and Hunter (1980) corrected the mean correlation coefficient in their validity generalization study of job proficiency in clerical occupations for predictor and criterion unreliability as well as for range restriction on the predictor.
There are several methods that can be used to correct correlations for attenuation and range restriction, some more frequently used than others. For attenuation, the traditional correction formula is the best known and is easy to use. In more complex modeling situations, however, it is probably easier to adopt an SEM approach to assessing relationships between variables with measurement errors removed than to apply the traditional formula to many relationships simultaneously. Fan (2003) shows that the SEM approach (at least in the CFA context) produces results equivalent to the traditional method. For range restriction, the Thorndike Case II method has been shown to produce close estimates of the correlation in a population (Hunter & Schmidt, 1990). Wiberg and Sundström (2009) show that ML estimates obtained from the EM algorithm also provide a very good estimate of the correlation in the unrestricted sample. However, because ML estimation via the EM algorithm is not commonly used in range restriction studies, the usefulness and accuracy of this method should be examined further. Using an appropriate method for correcting for attenuation and range restriction is most important when conducting predictive validity studies of instruments used, for example, in selection for higher education or employment. Using inappropriate methods to correct for statistical artifacts, or applying no correction at all, could lead to invalid conclusions about test quality. Thus, carefully considering methods for correcting for attenuation and range restriction in correlation studies is an important validity issue.
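Thorndike's Case II correction mentioned above can be sketched as follows (the selection-study numbers are hypothetical):

```python
import math

def thorndike_case2(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II: estimate the population (unrestricted) correlation
    from a correlation observed in a sample directly restricted on the predictor."""
    u = sd_unrestricted / sd_restricted   # ratio of SDs; u > 1 under restriction
    return (r_restricted * u) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))

# Hypothetical selection study: r = .30 among admitted applicants, with a
# predictor SD of 100 in the full applicant pool but only 60 among the admitted.
print(round(thorndike_case2(0.30, 100, 60), 3))   # estimate for the full pool
```

The corrected value is noticeably larger than the restricted-sample correlation, illustrating why validity coefficients computed only on selected groups understate an instrument's predictive value.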
The literature reviewed here clearly suggests that practitioners should apply attenuation and range restriction corrections whenever possible, even if the study does not focus on measurement issues (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999).