Tuesday, April 7, 2020

Modern Organizational Theory Vs. Improvisation Essays -

Organization theory deals with the formal structure, internal workings, and external environment of complex human behavior within organizations. As a field spanning several disciplines, it prescribes how work and workers ought to be organized and attempts to explain the actual consequences of organizational behavior (including individual behavior) on work done and on the organization itself (Gordon and Milakovich, 147). It has been evolving for centuries over how work should be done in public administration and how organizations should be structured. Research findings have emerged about what motivates workers, how different incentives affect various tasks, employees, and situations, and the environments in which they operate (Gordon and Milakovich, 147). Even with all of this research and the different schools of thought on organization, there are still situations in which the rational approach to public decisions does not help. For instance, what if the environment is unstable and has no guidelines or precedents to follow? In the case of Israel, improvisation has changed the organization of public administration, uprooted the conventional models for policymaking, and strayed from the Weberian model of administration. This kind of improvisation is the product of cultural and personal predilections and environmental circumstances (Sharkansky and Zalmanovitch, 1). The use of improvisation depends on the culture and the environment in which policy decisions are made. For example, improvisation is found more often among Spanish managers. Why? Spanish managers express an explicit preference for a spontaneous, improvised managerial style over the methodical and formal planning favored by their American, English, and Dutch counterparts (Sharkansky and Zalmanovitch, 2). Another proponent of the improvisational technique is the Israelis.
Improvisation is made inevitable in a situation where problems must be dealt with expediently and on the spur of the moment. Taking into consideration the conflict between the Arabs and the Israelis, improvisation is essential to running administrations. Formal theory such as Max Weber's cannot apply, since its framework of rules and procedures is meant to ensure stability, predictability, and reliability of performance; with no stability or predictability in the environment, these theories fall short of their expectations. Herbert Simon (1976) pointed out long ago that rational planning is bounded by many factors, such as skills, habits, reflexes, and values, which makes it impossible to achieve rational planning suited to every situation. Moreover, rational planning does not have primary value in Israel's cultural heritage. Survival in the Diaspora often depended on an ability to act quickly, with limited resources, under harsh, changing, and uncertain conditions (Sharkansky and Zalmanovitch, 2). With endless terrorism, continuous war, and a population that shifts from month to month, there is the perpetual challenge of responding to each situation expediently and ingeniously. Even under scientific management organizational theory, the formal structure and rules, the highly centralized top management levels, and especially the standardized procedures would make policy-making decisions disastrous for Israel. That mode of organization is meant to increase productivity, and thus profits. Yet Israel's leadership has had to accomplish a wide variety of expensive goals with limited means. These goals included creating the infrastructure of a modern, industrial society in an undeveloped setting; creating a welfare state which could house, educate, and provide employment and healthcare for successive waves of immigrants and their children; and providing its citizens with a decent standard of living.
(Sharkansky and Zalmanovitch, 3) With all of these pressures bearing on an economy of scarcity, the profit idea fits nowhere. Improvisation took hold in Israel in 1967, the day the city was united under total Israeli control. It involved deviating from Israel's formal policy to keep the peace. With the Arabs fighting to get their land back and Israel's strong-willed determination to keep all of the land under Israeli rule, this proved to be quite a task. Not only did the government have to improvise to ease the tension between the Arabs and the Israelis, it also had to settle the demands of the religious and the secular. To maintain a harmonious environment, the public decision makers had to improvise a way to keep the religious and secular Jews on the same level, whereby initiating one approach

Monday, March 9, 2020

Dilemma currently faced by the RBA Example

The Reserve Bank of Australia met on March 6 to discuss the dilemma currently facing its conduct of monetary policy. The RBA recognized that while the global economy was in recession, most major regions had witnessed steady improvements. While major downside risks remained, the probability of another major catastrophe seemed extremely limited. Domestically, however, the committee recognized that the economy continued to undergo significant structural adjustment because of the high terms of trade and the accompanying high exchange rate. As a result, the RBA's dilemma was to decide whether to change interest rates. There were a number of issues the RBA had to consider in making its decision. One of the central concerns in this regard was determining whether the adjustment was occurring at a pace that kept the country close to trend and inflation in the target range. The board looked at different sectors of the economy and recognized that while the housing sector was in decline, the mining and service sectors were expanding. The board also examined behavior from the major banks. It noted that the banks had passed on many of their higher funding cost pressures; these figures did not indicate anything out of the ordinary. While these conditions seemed apparent, some members considered that their assessment mechanisms might not be entirely accurate. For instance, disparate forces such as the large rise in resource investment and the high exchange rate could have unforeseen impacts. While most of these domestic indicators appeared stable, the RBA recognized that international factors could potentially create an adverse impact. Most central to its concerns was the tumultuous situation in Europe. The board recognized that this situation, as it impacts the flow of trade throughout the globe, could ultimately impact Australia.
The main link to the Australian economy would be if Europe triggered a slowdown in East Asia, which in turn reduced Australian exports. Specifically, this chain effect could depress demand for commodities and commodity prices. Continuing with European concerns, the RBA recognized that a slowdown in European markets, or even a collapse, could reduce global capital investment; this would then impact the exchange rate and consumer confidence. Still, the RBA recognized that as long as inflation remained stable it would be able to counter such a slowdown with specific policy measures. In conclusion, this essay has considered the dilemma the Reserve Bank of Australia faced at its March 6, 2012 meeting. The report has demonstrated that the RBA considered both international and domestic factors. Specifically, there was concern over the economy's structural adjustment to the very high terms of trade and the accompanying high exchange rate. The main challenge to these adjustments was the uncertainty in Europe, which could lead to a situation where demand for Australian exports declined. Ultimately, the RBA concluded there was no significant action that needed to be taken and decided to leave the cash rate unchanged at 4.25 per cent. References ‘Minutes of the Monetary Policy Meeting of the Reserve Bank Board.’ (2012)

Friday, February 21, 2020

HD-DVD vs. BLU-RAY Essay Example | Topics and Well Written Essays - 1750 words

Weaknesses are appearing on a couple of fronts. On the content development front, the DVD authoring front, we continue to see extreme price pressure in authoring services. The price of DVD authoring software has come down. The fact that more and more people are doing it puts incredible pressure on the production community to develop a quality product that is certifiable across all of the various players, delivers the same experience to the customer, and builds and maintains margins in their production services. (Sweeting, 2004, p7-46) On the other side, when we look at the next-generation high definition DVD disc, which will certainly be all the rage in 2006, a weakness there is how soon the consumer will embrace this technology, particularly if there are two formats. As we all know, DVD was the most successful consumer product ever launched, but if you look at it now, most people have DVD players and are very content with the experience that they get watching DVDs in their home. How quickly they will want to purchase a more expensive DVD player to play high definition material is something everyone is grappling with right now. (Capps, 2005) It is likely that, moving forward, the next wave [of opportunity] will fall to special interest categories and even corporate video: outreach and recruiting applications. Certainly, the opportunities for DVD in 2006 are tremendous. Beyond the applications in corporate, marketing, and outreach programs, we'll see special niche DVDs having greater acceptance. Then, of course, we have the whole next-generation issue, which hopefully will come to the forefront in 2006. The threat, especially for high definition, will definitely be how quickly the consumer will embrace that technology, particularly if there are two competing formats. You'll also have, on another front, a continuing piracy concern.
That will bring pressure on studios and on the production community to figure out ways to safeguard the transmission of the information and the actual content on the DVD. We will see more movies being released more quickly to DVD. We will see the continued growth of TV programming and music on DVD. We will see the emerging market of 'special interest' DVDs [educational, travel, marketing, outreach, recruiting, etc.]. And of course we will see the next generation of DVDs, in high definition. Blu-ray will win the high definition DVD arms race, but my guess is that it will take some time for the 75 percent of households who already own a standard definition DVD player to slowly warm up to parting with the cash to upgrade. In all likelihood the adoption will be much slower than standard DVD. The entire industry is holding off until March to launch both formats [HD-DVD and Blu-ray]. So we won't be able to tell until March of this year how well the fall of this year will go. Also, how fast will HD disc formats be adopted by consumers? It took four years for consumers to adopt DVD; will it take six years to get them over to HD? We don't really know. (Laser Focus World, 2004, p11-11) A cross-industry debate over the next-generation high-definition optical-disk format turned uglier after Microsoft and Intel publicly backed the HD-DVD standard over its Blu-ray rival. Moving beyond the turf war talk of whether PCs or consumer electronics will rule the digital living room, the HD-DVD vs. Blu-ray battle

Wednesday, February 5, 2020

The effects of light and darkness on harvester ants and their ability Lab Report

They are also favored in exposed and open areas, where their nests can be about 4.5 meters deep underground (MacKay, 1981). The experiment was divided into three groups: one control group and two experimental groups. The two experimental groups each had both moist and dry sand, but one was exposed to both darkness and light while the other had only darkness. Based on the results, it was observed that there was a difference in the effect of light and darkness on the ability of the ants to dig tunnels. The ants were also observed to have a greater ability to dig tunnels in dry sand than in wet sand. The light promoted the ants' digging of the tunnels. Therefore, it can be concluded that light supports the ants in digging tunnels while darkness does not. This is clearly evident in the darkness group: the measurements of the tunnels are lower in this group than in the light group, for both dry and wet sand. It was also observed that the ants were more capable of digging tunnels in dry sand than in wet sand. This is consistent with the literature, in which it is well known that these ants typically live in dry desert conditions. They are also favored in exposed and open areas, where their nests can be about 4.5 meters deep underground (Lavigne, 1969). The results of the experiment were satisfactory because they were in line with the literature. However, they are not confirmatory results, and they may act as a basis for further studies. This is because most of the issues involved in mimicking the natural environment of the ants were not considered, and this can affect their natural behavior, leading to significant errors in the experiment and potentially to unsound conclusions. The number of ants also needs to be considered; the number of ants used in the experiment was not sufficient to give excellent results.
In conclusion, the results mean that light and darkness have an impact on the ants’ behavior

Tuesday, January 28, 2020

Cache Memory Plays A Lead Role Information Technology Essay

Answer: Cache (pronounced cash) memory is extremely fast memory that is built into a computer's central processing unit (CPU) or located next to it on a separate chip. The CPU uses cache memory to store instructions that are repeatedly required to run programs, improving overall system speed. It gives the CPU fast access to frequently or recently used data. References: http://www.wisegeek.com/what-is-cache-memory.htm Reason for Cache Memory: There are various reasons for using cache in a computer; some of them are mentioned below. RAM is very slow compared to the CPU and is also far from the CPU (connected through the bus), so there is a need for another small memory which is very near to the CPU and also very fast, so that the CPU does not stall while waiting for resources from main memory. This memory is known as cache memory. It is also RAM, but of very high speed compared to primary memory, i.e. main RAM. Since the CPU works on femtosecond or nanosecond timescales, distance also plays a major role in performance. Cache memory is designed to supply the CPU with the most frequently requested data and instructions. Because retrieving data from cache takes a fraction of the time that it takes to access it from main memory, having cache memory can save a lot of time. Whenever we work on more than one application, cache memory is used to keep control of and locate the running applications within fractions of a nanosecond. It enhances the performance capability of the system. Cache memory communicates directly with the processor. It is used to prevent a mismatch between processor and memory speeds when switching from one application to another instantaneously, whenever needed by the user. It keeps track of all currently running applications and their currently used resources.
For example, a web browser stores newly visited web pages in a cache directory, so that we can return promptly to the page without requesting it from the original server. When we hit the Reload button, the browser compares the cached page with the current page out on the network and updates our local version if required. References: 1. http://www.kingston.com/tools/umg/umg03.asp 2. http://www.kingston.com/frroot/tools/umg/umg03.asp 3. http://ask.yahoo.com/19990329.html How Cache Works? Answer: The cache is programmed (in hardware) to hold recently-accessed memory locations in case they are needed again. Each of these instructions will be saved in the cache after being loaded from memory the first time. The next time the processor wants to use the same instruction, it will check the cache first, see that the instruction it needs is there, and load it from cache instead of going to the slower system RAM. The number of instructions that can be buffered this way is a function of the size and design of the cache. The details of how cache memory works vary depending on the cache controller and processor, so I won't describe the exact details. In general, though, cache memory works by attempting to predict which memory the processor is going to need next, loading that memory before the processor needs it, and saving the results after the processor is done with it. Whenever the byte at a given memory address needs to be read, the processor attempts to get the data from the cache memory. If the cache doesn't have that data, the processor is halted while it is loaded from main memory into the cache. At that time, memory around the required data is also loaded into the cache. When data is loaded from main memory into the cache, it will have to replace something that is already in the cache. When this happens, the cache determines whether the memory that is going to be replaced has changed.
If it has, it first saves the changes to main memory, and then loads the new data. The cache system doesn't worry about data structures at all, but rather whether a given address in main memory is in the cache or not. In fact, if you are familiar with virtual memory, where the hard drive is used to make it appear as if a computer has more RAM than it really does, cache memory is similar. Let's take a library as an example of how caching works. Imagine a large library but with only one librarian (the standard one-CPU setup). The first person comes into the library and asks for a CSA book (by Irv Englander). The librarian goes off, follows the path to the bookshelves (the memory bus), retrieves the book, and gives it to the person. The book is returned to the library once it is finished with. Now, without cache, the book would be returned to the shelf. When the next person arrives and asks for the same CSA book (by Irv Englander), the same process happens and takes the same amount of time. Cache memory is like a hot list of instructions needed by the CPU. The memory manager saves in cache each instruction the CPU needs; each time the CPU gets an instruction it needs from cache, that instruction moves to the top of the hot list. When cache is filled and the CPU calls for a new instruction, the system overwrites the data in cache that hasn't been used for the longest period of time. This way, the high-priority information that is used continuously stays in cache, while the less frequently used information drops out after an interval. It is similar to when you access a program frequently and it is listed on the start menu: you need not find the program in the list of all programs, you simply open the start menu and click on the program listed there, which saves your time.
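The "hot list" replacement policy described above is least-recently-used (LRU). A minimal Python sketch of that policy follows; the class name, capacity, and keys are illustrative, not from the source:

```python
# LRU replacement: each access moves an entry to the end of an OrderedDict;
# when the cache is full, the entry at the front (least recently used) is
# evicted, just like the "hot list" described above.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def access(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)   # promote to top of the hot list
        else:
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)  # evict least recently used
            self.entries[key] = value

cache = LRUCache(2)
cache.access("A", 1)
cache.access("B", 2)
cache.access("A", 1)      # "A" is now the most recently used
cache.access("C", 3)      # cache full: "B" (least recently used) is evicted
print(list(cache.entries))   # -> ['A', 'C']
```

The same idea scales from a two-entry toy to a real cache controller: only the bookkeeping that tracks "most recently used" changes.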
Working of cache in the Pentium 4: L1 cache (8K bytes, 64-byte lines, four-way set associative); L2 cache (256K, 128-byte lines, 8-way set associative). References: http://computer.howstuffworks.com/cache.htm http://www.kingston.com/tools/umg/umg03.asp http://www.zak.ict.pwr.wroc.pl/nikodem/ak_materialy/Cache%20organization%20by%20Stallings.pdf Levels of Cache Level 1 Cache (L1): The Level 1 cache, or primary cache, is on the CPU and is used for temporary storage of instructions and data organised in blocks of 32 bytes. Primary cache is the fastest form of storage. Because it is built into the chip with a zero wait-state (delay) interface to the processor's execution unit, it is limited in size. Level 1 cache is implemented using static RAM (SRAM) and until recently was traditionally 16KB in size. SRAM uses two transistors per bit and can hold data without external assistance, for as long as power is supplied to the circuit. The second transistor controls the output of the first: a circuit known as a flip-flop, so called because it has two stable states which it can flip between. This is in contrast to dynamic RAM (DRAM), which must be refreshed many times per second in order to hold its data contents. Intel's P55 MMX processor, launched at the start of 1997, was noteworthy for the increase in size of its Level 1 cache to 32KB. The AMD K6 and Cyrix M2 chips launched later that year upped the ante further by providing Level 1 caches of 64KB. 64KB has remained the standard L1 cache size, though various multiple-core processors may utilise it differently. For all L1 cache designs, the control logic of the primary cache keeps the most frequently used data and code in the cache and updates external memory only when the CPU hands over control to other bus masters, or during direct memory access by peripherals such as optical drives and sound cards.
http://www.pctechguide.com/14Memory_L1_cache.htm Level 2 Cache (L2): Most PCs are offered with a Level 2 cache to bridge the processor/memory performance gap. Level 2 cache (also referred to as secondary cache) uses the same control logic as Level 1 cache and is also implemented in SRAM. Level 2 caches typically come in two sizes, 256KB or 512KB, and can be found soldered onto the motherboard, in a Card Edge Low Profile (CELP) socket or, more recently, on a COAST module. The latter resembles a SIMM but is a little shorter and plugs into a COAST socket, which is normally located close to the processor and resembles a PCI expansion slot. The aim of the Level 2 cache is to supply stored information to the processor without any delay (wait-state). For this purpose, the bus interface of the processor has a special transfer protocol called burst mode. A burst cycle consists of four data transfers where only the address of the first transfer is output on the address bus. The most common Level 2 cache is synchronous pipeline burst. To have a synchronous cache, a chipset such as Triton is required to support it. It can provide a 3-5% increase in PC performance because it is timed to a clock cycle. This is achieved by use of specialised SRAM technology which has been developed to allow zero wait-state access for consecutive burst read cycles. There is also asynchronous cache, which is cheaper and slower because it isn't timed to a clock cycle, with asynchronous SRAM available in speeds between 12 and 20ns. (http://www.pctechguide.com/14Memory_L2_cache.htm) L3 Cache: Level 3 cache is something of a luxury item. Often only high-end workstations and servers need L3 cache. Currently, for consumers, only the Pentium 4 Extreme Edition even features L3 cache. L3 has been both on-die, meaning part of the CPU, and external, meaning mounted near the CPU on the motherboard.
It comes in many sizes and speeds. The point of cache is to keep the processor pipeline fed with data. CPU cores are typically the fastest part of the computer. As a result, cache is used to pre-read or store frequently used instructions and data for quick access. Cache acts as a high-speed buffer memory to more quickly provide the CPU with data. So, the concept of CPU cache leveling is one of performance optimization for the processor. http://www.extremetech.com/article2/0,2845,1517372,00.asp [Figure: the complete cache hierarchy of the Shanghai processor; Barcelona has a similar hierarchy except that it only has 2MB of L3 cache.] Cache Memory Organisation In a modern microprocessor several caches are found. They not only vary in size and functionality, but their internal organization is also typically different across the caches. Instruction Cache The instruction cache is used to store instructions. This helps to reduce the cost of going to memory to fetch instructions. The instruction cache regularly holds several other things, like branch prediction information. In certain cases, this cache can even perform some limited operations. The instruction cache on UltraSPARC, for example, also pre-decodes the incoming instruction. Data Cache A data cache is a fast buffer that contains the application data. Before the processor can operate on the data, it must be loaded from memory into the data cache. The element needed is then loaded from the cache line into a register and the instruction using this value can operate on it. The resultant value of the instruction is also stored in a register. The register contents are then stored back into the data cache. Eventually the cache line that this element is part of is copied back into the main memory. In some cases, the cache can be bypassed and data is stored into the registers directly.
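The load-on-miss behavior described in "How Cache Works" above, where the whole line around a requested address is fetched so that nearby accesses hit later, can be sketched in Python as follows; the line size, class name, and memory contents are illustrative:

```python
# Read-through cache with line fill: a miss loads the entire line (block)
# surrounding the requested address, so subsequent nearby reads are hits.

LINE_SIZE = 32  # bytes per cache line (illustrative)

class SimpleCache:
    def __init__(self, main_memory):
        self.memory = main_memory      # dict: address -> byte value
        self.lines = {}                # block number -> list of byte values
        self.hits = 0
        self.misses = 0

    def read(self, address):
        block = address // LINE_SIZE
        if block in self.lines:
            self.hits += 1             # found in cache: fast path
        else:
            self.misses += 1           # miss: fetch the whole line from memory
            base = block * LINE_SIZE
            self.lines[block] = [self.memory.get(base + i, 0)
                                 for i in range(LINE_SIZE)]
        return self.lines[block][address % LINE_SIZE]

mem = {i: i % 256 for i in range(1024)}
cache = SimpleCache(mem)
cache.read(100)     # miss: loads the line covering addresses 96..127
cache.read(101)     # hit: same line
print(cache.hits, cache.misses)   # -> 1 1
```

This is exactly the locality-of-reference bet: one slow fetch makes the neighboring 31 bytes cheap.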
TLB Cache Translating a virtual page address to a valid physical address is rather costly. The TLB is a cache that stores these translated addresses. Each entry in the TLB maps to an entire virtual memory page. The CPU can only operate on data and instructions that are mapped into the TLB. If this mapping is not present, the system has to re-create it, which is a relatively costly operation. The larger a page, the more effective capacity the TLB has. If an application does not make good use of the TLB (for example, with random memory access), increasing the size of the page can be beneficial for performance, allowing a bigger part of the address space to be mapped into the TLB. Some microprocessors, including UltraSPARC, implement two TLBs: one for pages containing instructions (I-TLB) and one for data pages (D-TLB). An example of a typical cache organization is shown below. Cache Memory Principles: a small amount of fast memory; placed between the processor and main memory; located either on the processor chip or on a separate module. Cache Operation Overview: The processor requests the contents of some memory location. The cache is checked for the requested data. If found, the requested word is delivered to the processor. If not found, a block of main memory is first read into the cache, then the requested word is delivered to the processor. When a block of data is fetched into the cache to satisfy a single memory reference, it is likely that there will be future references to that same memory location or to other words in the block (the locality of reference rule). Each block has a tag added to recognize it. Mapping Function An algorithm is needed to map main memory blocks into cache lines, and a method is needed to determine which main memory block occupies a cache line. There are three techniques used: direct, fully associative, and set associative. Direct Mapping: Direct mapped is a simple and efficient organization.
The (virtual or physical) memory address of the incoming cache line controls which cache location is going to be used. Implementing this organization is straightforward, and it is relatively easy to make it scale with the processor clock. In a direct mapped organization, the replacement policy is built in, because cache line replacement is controlled by the (virtual or physical) memory address. Direct mapping assigns each memory block to a specific line in the cache. If a line is already taken up by a memory block when a new block needs to be loaded, the old block is discarded. The figure below shows how multiple blocks are mapped to the same line in the cache. This line is the only line that each of these blocks can be sent to. In the case of this figure, there are 8 bits in the block identification portion of the memory address. Consider a simple example: a 4-kilobyte cache with a line size of 32 bytes, direct mapped on virtual addresses. Each load/store to cache thus moves 32 bytes. If one variable of type float takes 4 bytes on our system, each cache line will hold eight (32/4 = 8) such variables. http://csciwww.etsu.edu/tarnoff/labs4717/x86_sim/images/direct.gif The address for this is broken down something like the following: tag | 8 bits identifying the line in the cache | word id bits. Direct mapping is simple and inexpensive to implement, but if a program accesses two blocks that map to the same line repeatedly, the cache begins to thrash back and forth, reloading the line over and over again, meaning misses are very high. Fully Associative: The fully associative cache design solves the potential problem of thrashing with a direct-mapped cache. The replacement policy is no longer a function of the memory address, but considers usage instead. With this design, typically the oldest cache line is evicted from the cache. This policy is called least recently used (LRU). In the previous example, LRU prevents the cache lines of a and b from being moved out prematurely.
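Returning to the direct-mapped example above (a 4 KB cache with 32-byte lines, hence 4096/32 = 128 lines), the tag/line/word split of an address can be sketched as follows; the constants and function name are illustrative:

```python
# Direct-mapped address decomposition: 32-byte lines give a 5-bit word
# offset; 128 lines give a 7-bit line index; the remaining high bits
# form the tag stored alongside the line.
OFFSET_BITS = 5          # 32-byte line -> 5 offset bits
INDEX_BITS = 7           # 4096 / 32 = 128 lines -> 7 index bits

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# Two addresses exactly 4 KB apart land on the same line with different
# tags -- the thrashing scenario described in the text.
print(split_address(0x1234))   # -> (1, 17, 20)
print(split_address(0x0234))   # -> (0, 17, 20)
```

Alternating reads between those two addresses would evict and reload line 17 on every access, which is exactly why fully associative and set associative designs exist.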
The downside of a fully associative design is cost. Additional logic is required to track usage of lines. The larger the cache size, the higher the cost. Therefore, it is difficult to scale this technology to very large (data) caches. Luckily, a good alternative exists. The address is broken into two parts: a tag used to identify which block is stored in which line of the cache (s bits) and a fixed number of LSB bits identifying the word within the block: tag | word id bits. Set Associative: Set associative mapping addresses the problem of possible thrashing in the direct mapping method. It does this by saying that instead of having exactly one line that a block can map to in the cache, we will group a few lines together, creating a set. Then a block in memory can map to any one of the lines of a specific set. There is still only one set that the block can map to: tag | set id bits | word id bits.
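The set-associative placement just described can be sketched as follows, using two ways per set and LRU within each set; the sizes and class name are illustrative:

```python
# 2-way set-associative placement: a block maps to exactly one set
# (block number mod NUM_SETS) but may occupy either way within it;
# within a set, the least recently used way is evicted first.
NUM_SETS = 4
WAYS = 2

class SetAssociativeCache:
    def __init__(self):
        # each set is an ordered list of block numbers, oldest first
        self.sets = [[] for _ in range(NUM_SETS)]

    def access(self, block):
        s = self.sets[block % NUM_SETS]   # a block can map to only one set
        if block in s:
            s.remove(block)
            s.append(block)               # hit: mark most recently used
            return "hit"
        if len(s) >= WAYS:
            s.pop(0)                      # set full: evict the LRU way
        s.append(block)
        return "miss"

cache = SetAssociativeCache()
print(cache.access(0))   # -> miss
print(cache.access(4))   # -> miss (same set as block 0, second way)
print(cache.access(0))   # -> hit  (both blocks coexist; no thrashing)
```

Blocks 0 and 4 would thrash in a direct-mapped cache with four lines; here they share a set peacefully, which is the whole point of the design.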

Monday, January 20, 2020

Media Manipulation Essay -- essays research papers

The media and advertising do indeed hinder our being fully human. Mass media, including radio, television, and newspapers, endeavors to shape public opinion on a variety of things. The media attempts to manipulate those values instilled by parents and society in general, thus taking away from our being human. Messages designed to influence people's attitudes, desires, and decisions fall upon society, urging people to buy a certain product, vote for a certain political figure, or support a "worthy" cause. The daily attack of media and advertising persuades the public to be one and the same, rather than allowing them to function as humans who follow their own beliefs. Public opinion is formed through media propaganda. The network of communication systems (radio, magazines, newspapers, television, and films) informs those exposed as to their roles in society and their culture. Advertising has but one purpose: to sell a product or service or to promote a political figure by any and all means necessary, including brainwashing the general public. Companies try to make the consumer aware of their product and convince the world that their product is better than that of the competitor, as seen with the war between McDonald's and Burger King restaurants. This misuse of triggering the subconscious mind induces the public to buy things without knowing they have been deceived. Parents have the heaviest influence in shaping on...

Sunday, January 12, 2020

Ontario's Nuclear Plants

Ontario†s nuclear power planets are damaging our environment and economic structure; nuclear power should be shut down and replaced with safer methods of power making. Ontario†s nuclear power is not the safe and clean way to produce power, Ontario†s nuclear plants are becoming outdated, nuclear waste is building up, and contamination is becoming more of a threat. Ontario thought that nuclear was clean, safe, and cheap way to produce power. During the 1950s, Ontario Hydro was looking for new sources of electricity to meet the growing demand. In 1954, a partnership was formed between Atomic Energy of Canada Limited (AECL), Ontario Hydro and Canadian General Electric to build Canada†s first nuclear power plant called NPD for Nuclear Power Demonstration. In 1962, NPD began supplying the province of Ontario with its first nuclear generated electricity. Ontario had found it†s new source of electricity, and they were not fully aware consequences that would happen after many years of use. Power projects (later AECL CANDU), based in Toronto. Ontario and Montreal, Quebec became responsible for implementing AECL†s nuclear power program and marketing CANDU reactors. Nuclear power was cheap, if you did not have to worry about the waste. This was the answer to Ontario†s power problems, so they invested in the newest source of power at the time. Most people believed that nuclear power was a good change in Ontario†s power structure, and there would be no real problems in the future. Ontario needed a new source of power in the 1950s; they found it in nuclear power and it solved the problem. In the 1950s the average person did not have a lot of knowledge about nuclear energy, and nuclear studies were being held. All people really knew was the positive side of things, the government and research body†s made videos that would try to describe nuclear energy to the public. The videos would talk about how great nuclear power and how abundant nuclear energy was. 
The videos made it sound like the answer to all our electric needs. The government and research bodies skirted the subject of nuclear waste and the effects it could have on humans or the environment. The real truths about nuclear energy were not as widely known, and the majority of people thought that nuclear energy was a positive step in the right direction. Ontario has a huge problem with the build-up of nuclear waste, and this waste could have a huge impact on our environment if something were to go wrong. Radioactive mops, rags, clothing, tools, and contaminated equipment such as filters and pressure tubes are temporarily stored in shallow underground containers at the Bruce Nuclear Complex and elsewhere. At Bruce, a radwaste incinerator reduces the volume of combustible radioactive waste materials. In 1975, St. Mary's School in Port Hope was evacuated because of high radiation levels in the cafeteria. It was soon learned that large volumes of radioactive waste from uranium refining operations had been used as construction material in the school and all over town. Hundreds of homes were contaminated. There are 200 million tons of sand-like uranium tailings in Canada, mostly in Ontario and Saskatchewan. These radioactive wastes will remain hazardous for hundreds of thousands of years. They contain some of the most powerful carcinogens known: radium, radon gas, polonium, thorium, and others. Radioactive tailings also result from phosphate ores and other ores rich in uranium. In 1978, an Ontario Royal Commission recommended that a panel of world-class ecologists study the long-term problem of radioactive tailings and that the future of nuclear power be assessed in view of their findings. The government has ignored these recommendations. Nuclear waste does decay, but it takes hundreds of thousands of years to do so, which could leave unimaginable consequences for the future.
Lately Ontario†s nuclear power plants have been going threw horrible management, out dated equipment, and nuclear waste build up; resulting in economic breakdown. Ontario†s nuclear plants have not had their equipment greatly updated, which is a big problem that could be costly to fix. When calculated in real 1998 dollars, total federal subsidies to Atomic Energy of Canada Limited (AECL) for the last 46 years amount to $15. 8 billion. It should be noted that $15. 8 billion is a real cash subsidy to AECL, and does not include any opportunity cost? What the subsidies would have been worth if the government had invested in more cost competitive ventures. At a rate of 15%, the opportunity cost of government subsidies to AECL is $202 billion. There is also federal financial support for other nuclear activities in progress or impending, including: the Whiteshell Laboratories privatization ($23. 1 million); the MAPLE reactors at Chalk River Laboratories ($120 million); the Canadian Neutron Facility ($400 million); radioactive waste management and decommissioning ($665 million); and reactor exports ($2. 5 billion considered). In Ontario the bad management and the old equipment has lead to major change in the way the plats work. Also this will cost billions of dollars to do. In the long run Ontario†s nuclear do not make the money needed to stay open, with the costs of fixing them and reforming them it would just cost to much, so there is no point in doing so. Ontario has purposed to close down all of there nuclear plants, but they decided that is would be better to keep most of them open. There are much more safer, cleaner, and cheaper ways of producing power. We could invest solar, wind or tide power sources, all of which are safe. Leaving these nuclear plants open is like trying to heal your cut with a knife. Ontario and its people don†t realize that with the build up of nuclear waste, we could be looking into major crises. 
Many of Ontario Hydro†s problems are monetary in origin. The corporation has had difficulty maintaining its nuclear facilities in accordance with the Atomic Energy Control Board†s safety requirements. Hydro†s restructuring efforts reflect past negligence in preventive, minor, and responsive maintenance. It is now faced with a situation wherein the demand for energy must be met through the means of an increasingly limited resource . . . money. In response to this problem, the energy formerly supplied through nuclear power is being replaced primarily with coal-driven electrical generation. Hydro has implemented a short-term, quick fix solution based on the same practices and assumptions, which originally lead to the failure of Ontario†s nuclear energy program. As of now Ontario stands by its nuclear power and they do not have any current plans to shut down or totally reform these plants. Ontario†s nuclear power plants are a Danger to our environment, the economy, and a danger to the people, us; we should shut down all of these plants and replace them with safer methods.