  • Essay / Computer Systems and Architecture

“Describe how concepts such as RISC, pipeline, cache, and virtual memory have evolved over the past 25 years to improve system performance?”

Processor design, and its integration into computers of varying sizes and purposes, is naturally a major goal of the electronics industry. Greater computational throughput means larger infrastructure systems can be connected together, whether an online marketing and sales network or a series of sensors and gauges reporting to a central management processor. Concepts such as RISC, pipelining, cache, and virtual memory add up to create ever faster, more complex, and denser computer control systems.

Reduced Instruction Set Computing (RISC) architecture initially evolved because of a fundamental change in how computers were built: the increasing availability of cheaper, higher-quality memory. RISC uses a much smaller but more focused instruction set than its predecessor, CISC, relying on a larger number of simple instructions that each execute quickly, whereas CISC instructions are more expressive but take longer to complete. The reduced number of hardwired instructions left room for more general-purpose registers and cache, which in turn allowed a larger stream of memory accesses. Demonstrating that markets maintain momentum, the RISC architecture did not gain much traction outside of academia until federal university research grants funded projects requiring its use. The only RISC design to reach the mainstream PC market was the PowerPC, jointly developed by IBM, Apple, and Motorola in 1991. Outside the PC market, however, RISC use has accelerated in microprocessors for embedded systems and other hardware that needs only simple instructions but near real-time results. Berkeley developed its first version of RISC in 1981, with RISC-II becoming available a few years later. Sun Microsystems eventually created its Scalable Processor Architecture (SPARC) in 1987, which continues to sell to this day. SPARC took the idea of minimalism to the extreme, even removing multiplication and division from the instruction set in order to maximize access to registers. The "scalable" element of SPARC derives from its ability to be used in anything from a single microprocessor to a server hub with thousands of processors, all using the same instruction set.

Until recently, the key driver of processor design was ever-increasing clock speed. That progress has slowed because of the breakdown of Dennard scaling: increasing processor speeds combined with increasing transistor counts mean more heat and power. As a result, parallel processing has become a priority, leading to architectures such as Intel's EPIC (Explicitly Parallel Instruction Computing), based on VLIW, where the design focuses as much on completing several instruction bundles simultaneously as on raw execution speed. As it stands, EPIC may well make RISC obsolete, or even CISC (which is still widely used in PC-focused systems).

Just as processors began to exploit parallelism in how they are used, they also did so in how they are designed. Pipelining is the term for dividing instruction execution into concurrently operating stages that process the elements of an instruction in sequence, allowing one stage to finish its task, such as a fetch, and hand the instruction to the next stage while it starts another fetch.
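To make the RISC versus CISC contrast above concrete, here is a small illustrative sketch in Python. The "instructions", addresses, and memory model are invented for the example, not drawn from any real ISA: a single memory-to-memory CISC-style add decomposes, on a load-store RISC machine, into separate load, add, and store steps, each simple enough to execute quickly.

```python
# Illustrative sketch only: a toy model of the CISC vs. RISC contrast.
# The "instructions" and memory layout are invented for clarity.

memory = {0x100: 7, 0x104: 5, 0x108: 0}   # toy main memory
registers = {"r1": 0, "r2": 0, "r3": 0}   # a few general-purpose registers

def cisc_add_mem(dst, src_a, src_b):
    """One complex memory-to-memory instruction: ADD [dst], [a], [b]."""
    memory[dst] = memory[src_a] + memory[src_b]

def risc_add_mem(dst, src_a, src_b):
    """The same work as a sequence of simple load/operate/store steps."""
    registers["r1"] = memory[src_a]                       # LOAD  r1, [a]
    registers["r2"] = memory[src_b]                       # LOAD  r2, [b]
    registers["r3"] = registers["r1"] + registers["r2"]   # ADD   r3, r1, r2
    memory[dst] = registers["r3"]                         # STORE [dst], r3

cisc_add_mem(0x108, 0x100, 0x104)
print(memory[0x108])   # 12: one "verbose" instruction did everything
risc_add_mem(0x108, 0x100, 0x104)
print(memory[0x108])   # 12: four simple, register-centric instructions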
A traditional RISC pipeline consists of stages for instruction fetch, instruction decode and register fetch, execution, memory access, and register write-back. A pipeline is considered "superpipelined" when its depth is increased to the point where each stage's circuitry is simplified and clock frequencies can be raised significantly. When a pipeline can fetch a new instruction on every cycle, it is considered fully pipelined. Because of the considerable timing differences that can exist between one instruction's execution and another's, fully pipelined processors are difficult to build. To address this, dynamic pipelines let a processor recognize stalls, whether from mispredicted branches, memory-access delays, or excessively deep recursion, and reassign a pipeline slot to another process or thread to maintain throughput. The IBM 360/91 was one of the first major processing systems to make extensive use of pipelining; despite this added functionality, only about 20 units of that particular model were sold, mostly to NASA. Ultimately, pipelining leads into the realm of hyper-threading and multithreading in the growing field of instruction-level parallelism (ILP), the same parallelism mentioned above in connection with EPIC.

In 1965, Maurice Wilkes proposed the idea of "slave" memory, what would eventually be called cache memory: a store separate from main memory that held frequently requested instructions and data for faster processing. As with all developments in the computing world, things have changed a great deal since it was first proposed. Cache now most often exists in three levels. Level 1 cache (L1) is located directly on the processor chip so instructions can be accessed very quickly, and typically contains between 8 and 128 KB of memory. L2 cache, which can sit either on or off the chip itself, typically contains between 512 KB and 16 MB; it has greater capacity but is somewhat slower than L1. Finally, L3 cache traditionally resides off-chip entirely, typically contains up to 8 MB, and is slower still than L2, but it is used to back up L1 and L2. Cache matters because it can be accessed much faster than RAM or ROM and so minimizes pipeline stalls. Cache memory is typically organized, relative to main memory, using direct mapping, fully associative mapping, or set-associative mapping. Modern processors, such as the various recent ranges from Intel, include all three levels of cache directly on the chip. Cache is expensive to produce, however, and the main driver of its progress is the technology used to manufacture it, namely memory size and density.

Virtual memory is an idea first put forward by the Atlas team at the University of Manchester in 1959. It consisted of taking parts of main memory (or RAM, as it is now called) and copying them to a physical disk, from which they could be retrieved when needed.
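A minimal sketch of the five-stage flow described above, again in illustrative Python: each "instruction" simply records which stage it occupies on each clock cycle, showing how fetch, decode, execute, memory access, and write-back overlap once the pipeline fills. The stage names follow the essay; the instruction names and the hazard-free timing are assumptions made for illustration.

```python
# Minimal sketch of a five-stage in-order pipeline (fetch, decode, execute,
# memory access, write-back). Purely illustrative; no hazards are modeled.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_timeline(instructions):
    """Return, per clock cycle, which stage each instruction occupies."""
    timeline = []
    total_cycles = len(instructions) + len(STAGES) - 1
    for cycle in range(total_cycles):
        row = {}
        for i, instr in enumerate(instructions):
            stage_index = cycle - i          # instruction i enters IF on cycle i
            if 0 <= stage_index < len(STAGES):
                row[instr] = STAGES[stage_index]
        timeline.append(row)
    return timeline

for cycle, row in enumerate(pipeline_timeline(["add", "load", "store", "sub", "and"])):
    print(f"cycle {cycle}: {row}")
# By cycle 4 all five stages are busy, and from then on one instruction
# completes per cycle -- the "fully pipelined" ideal described above.
```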
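The mapping schemes mentioned in the cache discussion determine which cache line a given memory address may occupy. The sketch below is a hedged illustration with made-up parameters (64-byte lines, 64 sets, not taken from any real CPU) showing how a direct-mapped cache splits an address into tag, index, and offset fields, and why two addresses that share an index conflict with each other.

```python
# Illustrative direct-mapped cache address breakdown. The line size and
# set count are arbitrary example parameters, not those of any real CPU.

LINE_SIZE = 64      # bytes per cache line
NUM_SETS = 64       # number of lines in this toy direct-mapped cache

def split_address(addr):
    """Split an address into (tag, index, offset) fields."""
    offset = addr % LINE_SIZE                 # byte within the line
    index = (addr // LINE_SIZE) % NUM_SETS    # which line the address maps to
    tag = addr // (LINE_SIZE * NUM_SETS)      # identifies which block occupies it
    return tag, index, offset

# Two addresses exactly LINE_SIZE * NUM_SETS bytes apart share an index, so
# in a direct-mapped cache they evict each other (a conflict miss); a
# set-associative design eases this by allowing several tags per index.
for addr in (0x1234, 0x1234 + LINE_SIZE * NUM_SETS):
    print(hex(addr), split_address(addr))
```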
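Finally, the virtual-memory idea introduced at the end rests on dividing addresses into pages that may live either in RAM or on disk. The toy page-table lookup below is only a sketch of that mechanism; the 4 KB page size and the table contents are assumed for illustration, and a real operating system would service the "page fault" case by copying the page back in from disk.

```python
# Toy virtual-to-physical address translation with 4 KB pages. The page
# table contents here are made up purely to illustrate the mechanism.

PAGE_SIZE = 4096
page_table = {0: 7, 1: None, 2: 3}   # virtual page -> physical frame (None = on disk)

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        # Page fault: the OS would copy the page back from disk into a free
        # frame, update the page table, and retry the access.
        raise RuntimeError(f"page fault on virtual page {page}")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x0123)))   # virtual page 0 -> frame 7
print(hex(translate(0x2456)))   # virtual page 2 -> frame 3
try:
    translate(0x1000)           # virtual page 1 currently resides on disk
except RuntimeError as err:
    print(err)
```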