System Software

A context switch enables multiple processes to share a single CPU, and context switching is an essential feature of a multitasking operating system. When the scheduler switches the CPU from executing one process to executing another, the context switcher saves the contents of all processor registers for the process being removed from the CPU in its process descriptor.

The context of a process is represented in its process control block (PCB). Context-switch time is pure overhead, and context switching can significantly affect performance because modern computers have many general-purpose and status registers to be saved. Context-switch times are highly dependent on hardware support; some hardware systems employ two or more sets of processor registers to reduce the context-switch time. When a process is switched, information such as the program counter, the register contents, and scheduling and memory-management state is stored.
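As a user-space illustration of saving and restoring an execution context, the POSIX ucontext API can be used; this minimal sketch (the task and stack names are just illustrative) does with swapcontext() what a kernel does with a PCB on a context switch: it saves the registers and program counter of one flow of control and reloads those of another.

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];   /* private stack for the second context */

static void task(void) {
    printf("task: running\n");
    swapcontext(&task_ctx, &main_ctx);  /* save task's state, resume main */
    printf("task: resumed\n");          /* returning falls through to uc_link */
}

int main(void) {
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;   /* where to go when task returns */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);  /* save main's state, run task  */
    printf("main: back\n");
    swapcontext(&main_ctx, &task_ctx);  /* switch to task a second time */
    printf("main: done\n");
    return 0;
}
```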

Scheduling policies differ in how the next process is chosen: under priority scheduling, the process with the highest priority is executed first, and so on; under preemptive time-slicing, a process is preempted and another process executes for a given time period. Dining Philosophers Problem The scenario involves five philosophers sitting at a round table with a bowl of food and five chopsticks. Each chopstick sits between two adjacent philosophers.

The philosophers are allowed to think and eat. Since two chopsticks are required for each philosopher to eat, and only five chopsticks exist at the table, no two adjacent philosophers may be eating at the same time. A scheduling problem arises as to who gets to eat at what time. This problem is similar to the problem of scheduling processes that require a limited number of resources. The problem was designed to illustrate the challenges of avoiding deadlock, a system state in which no progress is possible.
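To make the failure concrete, here is a minimal pthread-based sketch of the obvious first attempt, in which each philosopher picks up the left chopstick and then the right one (the philosopher and chopstick names are illustrative, and mutexes stand in for chopsticks):

```c
#include <pthread.h>
#include <stdio.h>

#define N 5

/* One mutex per chopstick; a locked mutex is a chopstick in use. */
static pthread_mutex_t chopstick[N];

static void *philosopher(void *arg) {
    long i = (long)arg;
    for (;;) {                                        /* think, then try to eat */
        pthread_mutex_lock(&chopstick[i]);            /* pick up left chopstick  */
        pthread_mutex_lock(&chopstick[(i + 1) % N]);  /* pick up right chopstick */
        printf("philosopher %ld eats\n", i);
        pthread_mutex_unlock(&chopstick[(i + 1) % N]);
        pthread_mutex_unlock(&chopstick[i]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (long i = 0; i < N; i++) pthread_mutex_init(&chopstick[i], NULL);
    for (long i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)i);
    /* Sooner or later all five threads hold their left chopstick at once
       and the program stops making progress: deadlock. */
    for (long i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}
```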

This attempted solution fails because it allows the system to reach a deadlock state, in which no progress is possible: each philosopher has picked up the chopstick to the left and is waiting for the chopstick to the right to become available. What is a Thread? A thread is a flow of execution through the process code, with its own program counter, system registers and stack. A thread is also called a lightweight process.

Threads provide a way to improve application performance through parallelism. They represent a software approach to improving operating-system performance by reducing the overhead; in this sense a thread is equivalent to a classical process. Each thread belongs to exactly one process, and no thread can exist outside a process.

Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors.

The following figure shows the working of single-threaded and multithreaded processes. The main differences between processes and threads are:

1. A process is heavyweight or resource intensive; a thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not.
3. In multiple processing environments, each process executes the same code but has its own memory and file resources; all threads of a process can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until the first process is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources; multithreaded processes use fewer resources.
6. In multiple processes, each process operates independently of the others; one thread can read, write or change another thread's data.

User Level Threads In this case, thread management is done in user space by a thread library; the kernel is unaware of the threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application begins with a single thread and begins running in that thread. Kernel Level Threads In this case, thread management is done by the Kernel. There is no thread-management code in the application area. Kernel threads are supported directly by the operating system.

Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process. Scheduling by the Kernel is done on a thread basis.

The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than user threads.

Some operating systems provide a combined user-level thread and Kernel-level thread facility; Solaris is a good example of this combined approach. Many to Many Model In this model, many user-level threads are multiplexed onto a smaller or equal number of Kernel threads. The number of Kernel threads may be specific to either a particular application or a particular machine. Many to One Model The many-to-one model maps many user-level threads to one Kernel-level thread.

Thread management is done in user space. When a thread makes a blocking system call, the entire process blocks. Only one thread can access the Kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

If the user-level thread library is implemented on an operating system whose Kernel does not support threads, then the many-to-one model is used. One to One Model There is a one-to-one relationship between each user-level thread and a Kernel-level thread.

This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors. The disadvantage of this model is that creating a user thread requires creating the corresponding Kernel thread.

The main differences between user-level and Kernel-level threads are:

1. User-level threads are faster to create and manage; Kernel-level threads are slower to create and manage.
2. User-level threads are implemented by a thread library at the user level; the operating system supports creation of Kernel threads directly.
3. User-level threads are generic and can run on any operating system; Kernel-level threads are specific to the operating system.
4. A multithreaded application built on user-level threads cannot take advantage of multiprocessing; Kernel routines themselves can be multithreaded.

What is a Race Condition? A race condition is an undesirable situation that occurs when a device or system attempts to perform two or more operations at the same time but, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.

A race condition occurs when two threads access a shared variable at the same time. The first thread reads the variable, and the second thread reads the same value from the variable.

Then the first thread and second thread perform their operations on the value, and they race to see which thread can write the value last to the shared variable.
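A minimal sketch of such a lost update, assuming POSIX threads (the counter name and iteration count are illustrative; compile without optimizations to observe the race most reliably):

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared variable */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter = counter + 1;           /* read, add, write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Often prints less than 200000: whenever both threads read the
       same old value, one of the two updates is lost to the later writer. */
    printf("counter = %ld\n", counter);
    return 0;
}
```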

The value of the thread that writes its value last is preserved, because that thread writes over the value that the previous thread wrote.

Memory Management Memory management is the functionality of an operating system that handles or manages primary memory. Memory management keeps track of each and every memory location, whether it is allocated to some process or free.

It checks how much memory is to be allocated to processes. It decides which process will get memory at what time.

It tracks whenever some memory gets freed or unallocated and updates the status accordingly. Memory management provides protection by using two registers, a base register and a limit register. The base register holds the smallest legal physical memory address, and the limit register specifies the size of the range. For example, if the base register holds 300040 and the limit register holds 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive).
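The hardware check this implies can be sketched as follows (address_ok is a hypothetical helper for illustration, not a real API):

```c
#include <stdio.h>
#include <stdbool.h>

/* Hardware-style protection check: every address generated by a user
   process must satisfy base <= addr < base + limit, or the CPU traps
   to the operating system. */
static bool address_ok(unsigned long addr, unsigned long base,
                       unsigned long limit) {
    return addr >= base && addr - base < limit;
}

int main(void) {
    unsigned long base = 300040, limit = 120900;
    printf("%d %d %d\n",
           address_ok(300040, base, limit),   /* 1: first legal address */
           address_ok(420939, base, limit),   /* 1: last legal address  */
           address_ok(420940, base, limit));  /* 0: out of range, trap  */
    return 0;
}
```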

Dynamic Loading With dynamic loading, all routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed; other routines, methods or modules are loaded on request. Dynamic loading gives better memory-space utilization, and unused routines are never loaded. Dynamic Linking The operating system can link system-level libraries into a program.

When it combines the libraries at load time, the linking is called static linking, and when the linking is done at execution time, it is called dynamic linking. In static linking, libraries are linked at compile time, so the program code size becomes bigger, whereas in dynamic linking, libraries are linked at execution time, so the program code size remains smaller.
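As a concrete example of linking at execution time, POSIX systems provide the dlopen interface; this minimal sketch resolves cos from the math library at run time (the library file name varies between platforms, and older systems need -ldl at link time):

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the shared library only when the program runs; its code
       is not part of this binary. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Resolve a symbol from the library at run time. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine) printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```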

Swapping Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution. The backing store is usually a hard disk drive or other secondary storage that is fast in access and large enough to accommodate copies of all memory images for all users.

It must be capable of providing direct access to these memory images. The operating system uses the following memory allocation mechanisms:

1. Single-partition allocation: In this type of allocation, the relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses; each logical address must be less than the limit register.

2. Multiple-partition allocation: In this type of allocation, main memory is divided into a number of fixed-sized partitions, where each partition should contain only one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.

Fragmentation As processes are loaded and removed from memory, the free memory space is broken into little pieces.

It sometimes happens that processes cannot be allocated to memory blocks because the blocks are too small, so the memory blocks remain unused. This problem is known as fragmentation. Fragmentation is of two types:

1. External fragmentation: Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.

2. Internal fragmentation: The memory block assigned to a process is bigger than requested; some portion of it is left unused, as it cannot be used by another process.

External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. External fragmentation can also be avoided by using the paging technique. Paging Paging is a technique in which physical memory is broken into fixed-size blocks called frames, and logical memory into blocks of the same size called pages (the page size is a power of 2, typically between 512 bytes and 8192 bytes). When a process is to be executed, its corresponding pages are loaded into any available memory frames.

The logical address space of a process can be non-contiguous, and a process is allocated physical memory whenever a free memory frame is available. The operating system keeps track of all free frames, and it needs n free frames to run a program of size n pages.
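Address translation under paging is then simple arithmetic on the page number and offset; a minimal sketch, assuming a 4096-byte page size and an illustrative page table:

```c
#include <stdio.h>

#define PAGE_SIZE 4096u   /* 2^12 bytes per page */

int main(void) {
    unsigned frame_of_page[] = { 5, 9, 2, 7 };   /* toy page table */
    unsigned logical = 2 * PAGE_SIZE + 123;      /* page 2, offset 123 */

    unsigned page     = logical / PAGE_SIZE;     /* high bits: page number */
    unsigned offset   = logical % PAGE_SIZE;     /* low 12 bits: offset    */
    unsigned physical = frame_of_page[page] * PAGE_SIZE + offset;

    printf("page %u offset %u -> physical %u\n", page, offset, physical);
    return 0;
}
```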

Segmentation Segmentation is a technique to break memory into logical pieces, where each piece represents a group of related information: for example, a code segment and data segment for each process, a data segment for the operating system, and so on. Segmentation can be implemented with or without paging.

Buffering Buffering is done for three main reasons. The first is to cope with speed differences between two devices. A slow device may write data into a buffer, and when the buffer is full, the entire buffer is sent to the fast device all at once. So that the slow device still has somewhere to write while this is going on, a second buffer is used, and the two buffers alternate as each becomes full.

This is known as double buffering. Double buffering is often used in animated graphics, so that one screen image can be generated in a buffer while the other, completed buffer is displayed on the screen.
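A schematic sketch of the alternating buffers (single-threaded for brevity; a real implementation would synchronize the swap between producer and consumer):

```c
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 8

int main(void) {
    char buf_a[BUF_SIZE], buf_b[BUF_SIZE];
    char *fill = buf_a, *drain = buf_b;

    for (int round = 0; round < 3; round++) {
        memset(fill, 'a' + round, BUF_SIZE);          /* slow device writes  */
        char *tmp = fill; fill = drain; drain = tmp;  /* swap the buffers    */
        fwrite(drain, 1, BUF_SIZE, stdout);           /* fast device drains  */
        putchar('\n');
    }
    return 0;
}
```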

This prevents the user from ever seeing any half-finished screen images. The second reason for buffering is data transfer size differences: buffers are used in particular in networking systems to break messages up into smaller packets for transfer, and then for re-assembly at the receiving side.

The third reason is to support copy semantics. For example, when an application makes a request for a disk write, the data is copied from the user's memory area into a kernel buffer. The application can then change its copy of the data, but the data that eventually gets written out to disk is the version at the time the write request was made.

Virtual Memory This section describes the concepts of virtual memory, demand paging and various page replacement algorithms. Virtual memory is a technique that allows the execution of processes that are not completely available in memory. The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.

The entire program is not always required to be loaded fully in main memory: error-handling routines, rarely used options, and over-allocated data structures, for example, may never be needed in a given run. Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system, and demand segmentation can likewise be used to provide virtual memory. Page Replacement Algorithms Page replacement algorithms are the techniques by which the operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated.

Paging happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no free page is available or because the number of free pages is lower than required.

This process determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to select the pages to replace so as to minimize the total number of page misses, while balancing this against the costs of primary storage and of the processor time consumed by the algorithm itself.

There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults. Reference String The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data, about which we note two things: first, for a given page size we need to consider only the page number, not the entire address; second, a reference to a page already in memory causes no fault, so immediately repeated references to the same page can be collapsed.
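As a concrete illustration, the following minimal simulation counts the page faults that FIFO replacement produces on a short, illustrative reference string with three frames:

```c
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int ref[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3 };   /* reference string */
    int n = sizeof ref / sizeof ref[0];
    int frame[FRAMES] = { -1, -1, -1 };             /* empty frames */
    int next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frame[f] == ref[i]) hit = 1;        /* page already resident */
        if (!hit) {
            frame[next] = ref[i];                   /* evict oldest resident */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("%d page faults\n", faults);
    return 0;
}
```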

Translation Look-aside Buffer (TLB) A translation lookaside buffer (TLB) is a memory cache that stores recent translations of virtual memory addresses to physical addresses for faster retrieval. When a virtual memory address is referenced by a program, the search starts in the CPU: first the instruction caches are checked, and at this point the TLB is consulted for a quick reference to the location in physical memory.

When an address is searched in the TLB and not found (a TLB miss), the translation must be obtained by walking the page tables in memory. As virtual memory addresses are translated, the referenced values are added to the TLB. TLBs also add the support required for multi-user computers to keep memory separate, by having a user and a supervisor mode as well as using permissions on read and write bits to enable sharing. TLBs can suffer performance issues from multitasking and code errors.

This performance degradation is called cache thrashing. Cache thrashing is caused by ongoing computer activity that fails to progress because of excessive use of resources or conflicts in the caching system.
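A toy software model of the TLB fast path (purely illustrative; real TLBs are hardware structures, not programmable tables):

```c
#include <stdio.h>

#define TLB_SIZE 4

struct tlb_entry { unsigned page, frame; int valid; };
static struct tlb_entry tlb[TLB_SIZE];

/* Search the small cache of recent page -> frame translations
   before falling back to the page table. */
static int tlb_lookup(unsigned page, unsigned *frame) {
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;   /* TLB hit: fast path */
            return 1;
        }
    return 0;                        /* TLB miss: walk the page table */
}

int main(void) {
    tlb[0] = (struct tlb_entry){ .page = 2, .frame = 9, .valid = 1 };
    unsigned f = 0;
    if (tlb_lookup(2, &f)) printf("hit: frame %u\n", f);
    else                   printf("miss: walk the page table\n");
    return 0;
}
```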

The optimal page-replacement algorithm uses the time when a page is next to be used: it replaces the page that will not be used for the longest period of time. Operating System Security This section describes various security-related aspects such as authentication, one-time passwords, threats and security classifications. A computer system must be protected against unauthorized access, malicious access to system memory, viruses, worms and so on. We are going to discuss the following topics in this article.

One-time passwords provide additional security along with normal authentication. In a one-time password system, a unique password is required every time a user tries to log into the system; once a one-time password has been used, it cannot be used again. One-time passwords are implemented in various ways: the system may ask for numbers corresponding to a few randomly chosen alphabet positions, or for a secret id that must be generated anew prior to each login.

The operating system's processes and kernel perform their designated tasks as instructed. If a user program makes these processes perform malicious tasks, this is known as a program threat. A common example of a program threat is a program installed on a computer that can store and send user credentials over the network to some hacker. Following is a list of some well-known program threats. A virus is generally a small piece of code embedded in a program, which makes it harder to detect. System threats refer to the misuse of system services and network connections to put the user in trouble.

System threats can be used to launch program threats across a complete network; this is called a program attack. Following is a list of some well-known system threats. A worm process generates multiple copies of itself, where each copy uses system resources and prevents all other processes from getting the resources they require. Worm processes can even shut down an entire network.

Language Processors This definition motivates a generic model of language processing activities. We refer to the collection of language processor components engaged in analyzing a source program as the analysis phase of the language processor. Components engaged in synthesizing a target program constitute the synthesis phase. Hardware is just a mechanical device whose functions are controlled by compatible software.

Hardware understands instructions in the form of electronic charge, which is the counterpart of binary language in software programming. Binary language has only two symbols, 0 and 1. To instruct the hardware, codes must be written in binary format, which is simply a series of 1s and 0s. It would be a difficult and cumbersome task for computer programmers to write such codes, which is why we have compilers to write them.

Language Processing System We have learnt that any computer system is made of hardware and software. The hardware understands a language that humans cannot understand, so we write programs in a high-level language, which is easier for us to understand and remember. These programs are then fed into a series of tools and OS components to get the desired code that can be used by the machine. This is known as a language processing system. Preprocessors may perform the following functions. Macro processing: a preprocessor may allow a user to define macros that are shorthands for longer constructs.

File inclusion: a preprocessor may include header files into the program text. Rational preprocessor: these preprocessors augment older languages with more modern flow-of-control and data-structuring facilities. An important part of a compiler is the reporting of errors to the programmer. Early programmers began to use mnemonic symbols for each machine instruction, which they would subsequently translate into machine language by hand.

Such a mnemonic machine language is now called an assembly language. Programs known as assemblers were written to automate the translation of assembly language into machine language. The input to an assembler program is called the source program; the output is a machine-language translation called the object program.

What is an assembler? A tool called an assembler translates assembly language into binary instructions. Symbolic names for operations and locations are one facet of this representation. An assembler reads a single assembly language source file and produces an object file containing machine instructions and bookkeeping information that helps combine several object files into a program.
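For instance, the MIPS instruction add $t0, $t1, $t2 assembles to the 32-bit word 0x012A4020; the following sketch performs the R-type encoding an assembler would do (register numbers $t0 = 8, $t1 = 9, $t2 = 10, and function code 0x20 for add):

```c
#include <stdio.h>
#include <stdint.h>

/* Encode a MIPS R-type instruction: op(0) | rs | rt | rd | shamt | funct. */
static uint32_t encode_rtype(uint32_t rs, uint32_t rt, uint32_t rd,
                             uint32_t shamt, uint32_t funct) {
    return (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct;
}

int main(void) {
    /* add $t0, $t1, $t2  ->  rd = 8, rs = 9, rt = 10, funct = 0x20 */
    printf("0x%08X\n", encode_rtype(9, 10, 8, 0, 0x20)); /* 0x012A4020 */
    return 0;
}
```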

Figure 1 illustrates how a program is built. Most programs consist of several files—also called modules— that are written, compiled, and assembled independently.

A program may also use prewritten routines supplied in a program library. A module typically contains references to subroutines and data defined in other modules and in libraries. The code in a module cannot be executed while it contains unresolved references to labels in other object files or libraries. Another tool, called a linker, combines a collection of object and library files into an executable file, which a computer can run. The assembler provides:

a. Access to the entire instruction set of the machine.

b. A means for specifying the run-time locations of program and data in memory.
c. Symbolic labels for the representation of constants and addresses.
d. Assemble-time arithmetic.
e. The use of synthetic instructions.
f. Machine code emitted in a form that can be loaded and executed.
g. Syntax-error reporting and program listings.
h. An interface to the module linker and program loader.

i. Expansion of programmer-defined macro routines.

Interpretation may be pure or impure. Pure interpretation analyses the source code afresh on every execution, which requires more overhead and makes the process complex. When impure, the source code is subjected to some initial preprocessing before the code is eventually interpreted; the actual analysis overhead is then reduced, enabling faithful and efficient interpretation. Java also uses an interpreter.

The process of interpretation can be carried out in the following phases: 1. Lexical analysis 2. Syntax analysis 3. Semantic analysis 4. Direct execution. Loader and Link-editor: Once the assembler produces an object program, that program must be placed into memory and executed. The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine-language program to be executed.

Also, the programmer would have to retranslate the program with each execution, thus wasting translation time. To overcome these problems of wasted translation time and memory, the object program is instead written out, and a loader places it into memory when it is to be run. It is also expected that a compiler should make the target code efficient and optimized in terms of time and space.

Compiler design principles provide an in-depth view of the translation and optimization process. It includes lexical, syntax, and semantic analysis as the front end, and code generation and optimization as the back end. Analysis Phase Known as the front end of the compiler, the analysis phase of the compiler reads the source program, divides it into core parts and then checks for lexical, grammar and syntax errors.

The analysis phase generates an intermediate representation of the source program and symbol table, which should be fed to the Synthesis phase as input.

Figure: analysis and synthesis phases of a compiler. Synthesis Phase Known as the back end of the compiler, the synthesis phase generates the target program with the help of the intermediate source code representation and the symbol table. A compiler can have many phases and passes. Pass: a pass refers to the traversal of a compiler through the entire program. Phase: a phase of a compiler is a distinguishable stage, which takes input from the previous stage, processes it, and yields output that can be used as input for the next stage.

A pass can have more than one phase. A common division into phases is described below. In some compilers, the ordering of phases may differ slightly, some phases may be combined or split into several phases or some extra phases may be inserted between those mentioned below.

Lexical analysis This is the initial part of reading and analysing the program text: the text is read and divided into tokens, each of which corresponds to a symbol in the programming language, e.g., a variable name, keyword or number. Syntax analysis This phase takes the list of tokens produced by the lexical analysis and arranges these in a tree-structure called the syntax tree that reflects the structure of the program.

This phase is often called parsing. Type checking This phase analyses the syntax tree to determine if the program violates certain consistency requirements, e.g., if a variable is used but not declared, or is used in a context that does not allow its type. Intermediate code generation The program is translated to a simple machine-independent intermediate language. Register allocation The symbolic variable names used in the intermediate code are translated to numbers, each of which corresponds to a register in the target machine code. In terms of programming languages, words are objects like variable names, numbers, keywords etc.

Lexical analysis is the first phase of a compiler. It takes the modified source code from the language preprocessors, written in the form of sentences. The lexical analyzer breaks this text into a series of tokens, removing any whitespace and comments in the source code. If the lexical analyzer finds a token invalid, it generates an error.

The lexical analyzer works closely with the syntax analyzer. It reads character streams from the source code, checks for legal tokens, and passes the data to the syntax analyzer when demanded.

Tokens A lexeme is a sequence of (typically alphanumeric) characters forming a token. There are predefined rules for every lexeme to be identified as a valid token; these rules are defined by grammar rules, by means of a pattern. A pattern explains what can be a token, and these patterns are defined by means of regular expressions.
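A minimal sketch of such a tokenizer, using simple character-class tests in place of full regular expressions (the token names are illustrative):

```c
#include <ctype.h>
#include <stdio.h>

int main(void) {
    const char *src = "count = count + 42;";
    for (const char *p = src; *p; ) {
        if (isspace((unsigned char)*p)) { p++; continue; }  /* skip whitespace */
        if (isdigit((unsigned char)*p)) {
            printf("NUMBER: ");                 /* pattern [0-9]+ */
            while (isdigit((unsigned char)*p)) putchar(*p++);
        } else if (isalpha((unsigned char)*p)) {
            printf("IDENT : ");                 /* pattern [A-Za-z][A-Za-z0-9]* */
            while (isalnum((unsigned char)*p)) putchar(*p++);
        } else {
            printf("OP    : %c", *p++);         /* single-character operator */
        }
        putchar('\n');
    }
    return 0;
}
```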

Syntax Analysis Introduction Syntax analysis or parsing is the second phase of a compiler. In this chapter, we shall learn the basic concepts used in the construction of a parser. We have seen that a lexical analyzer can identify tokens with the help of regular expressions and pattern rules. But a lexical analyzer cannot check the syntax of a given sentence due to the limitations of the regular expressions.

Regular expressions cannot check balancing tokens, such as parentheses. Syntax Analyzers A syntax analyzer or parser takes the input from a lexical analyzer in the form of token streams. The parser analyzes the source code (token stream) against the production rules to detect any errors in the code.

The output of this phase is a parse tree. This way, the parser accomplishes two tasks, i.e., parsing the code and looking for errors, and it generates a parse tree as the output of the phase. Parsers are expected to parse the whole code even if some errors exist in the program; parsers use error-recovery strategies, which we will learn later in this chapter. Parse Tree A parse tree is a graphical depiction of a derivation. It is convenient to see how strings are derived from the start symbol; the start symbol of the derivation becomes the root of the parse tree. Let us see this by an example from the last topic.

Types of Parsing Syntax analyzers follow production rules defined by means of a context-free grammar. The way the production rules are implemented (derivation) divides parsing into two types: top-down parsing and bottom-up parsing. Top-down Parsing When the parser starts constructing the parse tree from the start symbol and then tries to transform the start symbol to the input, it is called top-down parsing.

Recursive Descent Parsing Recursive descent is a top-down parsing technique that constructs the parse tree from the top, with the input read from left to right. It is called recursive as it uses recursive procedures to process the input. Recursive descent parsing may suffer from backtracking: the technique may process the input string more than once to determine the right production.

It uses procedures for every terminal and non-terminal entity. This parsing technique recursively parses the input to make a parse tree, which may or may not require back-tracking; a grammar that is not left-factored, however, cannot avoid back-tracking.

A form of recursive-descent parsing that does not require any back-tracking is known as predictive parsing. This parsing technique is regarded as recursive because it uses a context-free grammar, which is recursive in nature. Back-tracking Top-down parsers start from the root node (start symbol) and match the input string against the production rules, expanding them as they match. If an expansion does not match the next input symbol, the parser back-tracks and tries another production.

When the parser has matched all the input letters in an ordered manner, the string is accepted. Predictive Parser A predictive parser is a recursive descent parser that has the capability to predict which production is to be used to replace the input string.

The predictive parser does not suffer from backtracking. To accomplish its tasks, it uses a look-ahead pointer, which points to the next input symbols. To make the parser back-tracking free, the predictive parser puts some constraints on the grammar and accepts only the class of grammars known as LL(k) grammars.

Predictive parsing uses a stack and a parsing table to parse the input and generate a parse tree. The parser refers to the parsing table to take any decision on the input and stack element combination. In recursive descent parsing, the parser may have more than one production to choose from for a single instance of input, whereas in predictive parser, each step has at most one production to choose.

There might be instances where no production matches the input string, making the parsing procedure fail. LL grammar is a subset of context-free grammar, restricted to a simplified form in order to achieve easy implementation.

LL grammar can be implemented by means of both algorithms, namely recursive descent and table-driven parsing. An LL parser is denoted LL(k): the first L means parsing the input from left to right, the second L stands for left-most derivation, and k itself represents the number of look-aheads.
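A minimal sketch of a predictive, LL(1)-style recursive-descent parser for a toy expression grammar (the grammar, procedure names and the evaluation of a result are all illustrative): one procedure per nonterminal, with a single character of look-ahead deciding every production, so no backtracking is needed.

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Grammar:  E -> T { '+' T }    T -> F { '*' F }    F -> digit | '(' E ')' */
static const char *p;          /* look-ahead pointer into the input */

static int expr(void);

static int factor(void) {
    if (*p == '(') {
        p++;
        int v = expr();
        if (*p == ')') p++; else { puts("missing )"); exit(1); }
        return v;
    }
    if (isdigit((unsigned char)*p)) return *p++ - '0';
    puts("syntax error"); exit(1);
}

static int term(void) {
    int v = factor();
    while (*p == '*') { p++; v *= factor(); }   /* look-ahead picks the rule */
    return v;
}

static int expr(void) {
    int v = term();
    while (*p == '+') { p++; v += term(); }
    return v;
}

int main(void) {
    p = "(1+2)*3+4";
    printf("(1+2)*3+4 = %d\n", expr());   /* prints 13 */
    return 0;
}
```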

Bottom-up Parsing As the name suggests, bottom-up parsing starts with the input symbols and tries to construct the parse tree up to the start symbol. It starts from the leaf nodes of a tree and works upward until it reaches the root node: we start from a sentence and then apply production rules in reverse in order to reach the start symbol. Shift-Reduce Parsing Shift-reduce parsing uses two unique steps for bottom-up parsing.

These steps are known as the shift step and the reduce step. In the shift step, the input pointer advances to the next input symbol, which is pushed onto the stack; in the reduce step, when a complete right-hand side of a grammar rule appears on top of the stack, it is replaced by the rule's left-hand side.
