CA Assignment 2025 Assignment1 FullAnswers
(Assignment-1)
Set-I
2. Instruction Length: CISC instructions vary in length; RISC instructions are fixed length
(typically 32-bit).
4. Execution Cycles: CISC may take multiple cycles per instruction; RISC is designed for
single-cycle execution.
5. Register Count: CISC uses fewer registers; RISC architectures provide many general-
purpose registers.
6. Addressing Modes: CISC supports numerous modes; RISC limits to simple load/store
addressing.
7. Memory Access: CISC allows multiple memory accesses per instruction; RISC employs
load/store architecture.
8. Pipeline Efficiency: RISC's uniform, fixed-length instructions pipeline cleanly; CISC's
variable-length, multi-cycle instructions complicate pipeline stage design.
9. Compiler Role: RISC relies heavily on compiler optimizations; CISC offloads complexity to
hardware.
10. Code Density: CISC instructions pack more work per instruction, often yielding denser
code.
12. Power Consumption: RISC’s simplicity leads to lower power use, beneficial in embedded
systems.
13. Hardware Complexity: CISC control units are more complex; RISC simplifies the control
path.
14. Implementation Cost: RISC design is cheaper to implement in silicon due to simplicity.
15. Legacy Support: CISC maintains backward compatibility with older codebases.
16. Scalability: RISC architectures scale well with increasing clock speeds.
18. Examples: x86 is historically CISC; ARM and MIPS are classic RISC.
19. Modern Convergence: Many modern CISC chips internally translate to RISC-like micro-
ops.
20. Summary: CISC optimizes code density and legacy support; RISC focuses on speed and
simplicity.
7. Integer overflow wraps around via modulo arithmetic; floating-point overflow leads to
infinities.
11. Fixed-point arithmetic is faster and simpler but offers less range and flexibility than
floating point.
14. SIMD extensions (e.g., SSE, AVX) apply one operation to multiple integer or floating-point
elements at once.
15. Sign extension and zero extension handle converting between integer sizes.
16. IEEE 754 floating-point exceptions: invalid operation (which produces NaN), division by
zero, overflow, underflow, and inexact.
18. Mixed-type arithmetic requires type promotion and potential conversion overhead.
20. Choosing between fixed and floating involves trade-offs of speed, precision, and range.
2. Cache levels (L1, L2, L3) bridge the gap between CPU speed and main-memory latency.
4. Cache associativity, line size, and replacement policy impact hit rates.
8. TLB (Translation Lookaside Buffer) caches page table entries for speed.
10. Optical disks (CD/DVD) sit lower in the hierarchy, with higher access times and lower
bandwidth than magnetic or solid-state storage.
11. SSDs replace HDDs for faster random access and higher throughput.
20. Holistic tuning across cache, RAM, and storage yields best overall throughput.
4. Organization and utilization of system memory and cache memory: types and
organization.
1. System memory (DRAM) organized in rows, columns, banks, and channels.
8. Cache lines store contiguous bytes; line size affects spatial locality.
10. Write-back uses dirty bits to mark modified lines for later write-back.
14. Cache locking and partitioning optimize real-time and multi-core workloads.
17. ECC DIMMs detect and correct single-bit errors in main memory.
20. Proper balance of cache size, associativity, and policies maximizes efficiency.