Master of Computer Application (MCA) – Semester 2
MC0070 – Operating Systems with Unix
Assignment Set – 1

1. Describe the following operating system components:

A) Functions of an Operating System :- Today most operating systems perform the following important functions:

1. Processor management, that is, assignment of the processor to the different tasks being performed by the computer system.
2. Memory management, that is, allocation of main memory and other storage areas to system programs as well as user programs and data.
3. Input/output management, that is, coordination and assignment of the different input and output devices while one or more programs are being executed.
4. File management, that is, the storage of files on the various storage devices and the transfer of files from one storage device to another. It also allows files to be easily changed and modified through the use of text editors or other file manipulation routines.
5. Establishment and enforcement of a priority system, that is, determining and maintaining the order in which jobs are executed in the computer system.
6. Automatic transition from job to job as directed by special control statements.
7. Interpretation of commands and instructions.
8. Coordination and assignment of compilers, assemblers, utility programs, and other software to the various users of the computer system.
9. Facilitating easy communication between the computer system and the computer operator (human). It also establishes data security and integrity.

B) Operating system components :- The operating system comprises a set of software packages that can be used to manage interactions with the hardware. The following elements are generally included in this set of software:

• The kernel, which implements the operating system's basic functions: management of memory, processes, files and the main inputs/outputs, along with communication facilities.
• The shell, which allows communication with the operating system via a command language, letting the user control the peripherals without knowing the characteristics of the hardware used, the management of physical addresses, and so on.
• The file system, which allows files to be recorded and organised in a tree structure.


2. Describe the following:

A. Micro Kernels :- In computer science, a microkernel is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system. These mechanisms include low-level address space management, thread management, and inter-process communication (IPC). As an operating system design approach, microkernels permit typical operating system services, such as device drivers, protocol stacks, file systems and user interface code, to run in user space. If the hardware provides multiple rings or CPU modes, the microkernel is the only software executing at the most privileged level (generally referred to as supervisor or kernel mode). Microkernels are closely related to exokernels.[1] They also have much in common with hypervisors,[2] but the latter make no claim to minimality and are specialized in supporting virtual machines; indeed, the L4 microkernel frequently finds use in a hypervisor capacity. The historical term nanokernel has been used to distinguish modern, high-performance microkernels from earlier implementations which still contained many system services. However, nanokernels have all but replaced their microkernel progenitors, and the term has fallen into disuse.

B. Modules :- The variables and methods in the os module allow you to interact with files and directories. In most cases the names and functionalities are the same as those of the equivalent Unix commands and system calls. The os module is Python's operating system interface; its most commonly used members are listed here, and a short usage sketch follows the list.

environ : A dictionary whose keys are the names of all currently defined environment variables, and whose values are the values of those variables.

error : The exception raised for errors in this module.

chdir(p) : Change the current working directory to that given by string p.

chmod(p, m) : Change the permissions for pathname p to m. See the stat module for symbolic constants to be used in making up m values.

chown(p, u, g) : Change the owner of pathname p to user id u and group id g.

_exit(n) : Exit the current process and return status code n. This method should be used only by the child process after a fork(); normally you should use sys.exit().

getcwd() : Return the current working directory name as a string.

kill(p, s) : Send signal s to the process whose process ID is p.

listdir(p) : Return a list of the names of the files in the directory whose pathname is p. This list will never contain the special entries "." and ".." for the current and parent directories. The entries may not be in any particular order.

mkdir(p[, m]) : Create a directory at pathname p. You may optionally specify permissions m; see the stat module for the interpretation of permission values.

remove(p) : Remove the file with pathname p, as in the Unix rm command. Raises OSError if it fails.

rename(po, pn) : Rename path po to pn.

rmdir(p) : Remove the directory at path p.
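A minimal usage sketch of a few of these calls (the directory name demo_dir is made up for the example; any path that does not already exist would do):

import os

print(os.getcwd())                     # current working directory as a string
for name in os.listdir("."):           # directory entries, without "." and ".."
    print(name)

os.mkdir("demo_dir", 0o755)            # create a directory with explicit permissions
os.rename("demo_dir", "demo_dir2")     # rename it ...
os.rmdir("demo_dir2")                  # ... and remove it again

print(os.environ.get("HOME"))          # read an environment variable through environ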

3. Describe the concept of process control in Operating systems.

Ans:- Process Management: Multiprogramming systems explicitly allow multiple processes to exist at any given time, where only one uses the CPU at any given moment while the remaining processes are performing I/O or are waiting. The process manager is one of the four major parts of the operating system. It implements the process abstraction, creating a model for the way the process uses the CPU and any system resources. Much of the complexity of the operating system stems from the need for multiple processes to share the hardware at the same time. As a consequence of this goal, the process manager implements CPU sharing (called scheduling), process synchronization mechanisms, and a deadlock strategy. In addition, the process manager implements part of the operating system's protection and security.

Process States
During the lifespan of a process, its execution status may be in one of five states (associated with each state is usually a queue on which the process resides):

• Executing: the process is currently running and has control of a CPU.
• Waiting: the process is currently able to run, but must wait until a CPU becomes available.
• Blocked: the process is currently waiting on I/O, either for input to arrive or output to be sent.
• Suspended: the process is currently able to run, but for some reason the OS has not placed the process on the ready queue.
• Ready: the process is in memory and will execute when given CPU time.

Process Control Block (PCB)
If the OS supports multiprogramming, then it needs to keep track of all the processes. For each process, its process control block (PCB) is used to track the process's execution status, including the following:

• Its current processor register contents.
• Its processor state (if it is blocked or ready).
• Its memory state.
• A pointer to its stack.
• Which resources have been allocated to it.
• Which resources it needs.
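Purely as an illustration (real kernels keep this information in a C structure, for example Linux's task_struct), a PCB can be sketched as a small Python record holding those fields:

from dataclasses import dataclass, field

@dataclass
class PCB:                                          # toy Process Control Block
    pid: int
    state: str = "ready"                            # executing / waiting / blocked / suspended / ready
    registers: dict = field(default_factory=dict)   # saved processor register contents
    stack_pointer: int = 0                          # pointer to the process's stack
    memory_limits: tuple = (0, 0)                   # memory state: base and limit of the image
    allocated_resources: list = field(default_factory=list)
    requested_resources: list = field(default_factory=list)

p = PCB(pid=42, registers={"pc": 0x400000})
p.state = "executing"
print(p)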

4. Describe the following with respect to UNIX operating System:

A) Hardware Management :- One of the first things you do, after successfully plugging together a plethora of cables and components, is turn on your computer. The operating system takes care of all the starting functions that must occur to get your computer to a usable state. Various pieces of hardware need to be initialized. After the start-up procedure is complete, the operating system awaits further instructions. If you shut down the computer, the operating system also has a procedure that makes sure all the hardware is shut down correctly. Before turning your computer off again, you might want to do something useful, which means that one or more applications are executed. Most boot ROMs do some hardware initialization, but not much; initialization of I/O devices is part of the UNIX kernel.

To perform its task, a process may need to access hardware resources. The process may need to read or write to a file, send data to a network card (to communicate with another computer), or send data to a printer. The operating system provides such services for the process. This is referred to as resource allocation. A piece of hardware is a resource, and the operating system allocates available resources to the different processes that are running.

B) Unix Architecture :-

Kernel: Unix is a multitasking, multiuser operating system, designed from day one to be lean and effective in dealing with real-time processing of information. The operating system design essentially embodies effective operating system control over resources (hard disk, tapes, screen, file system, etc.), so that as many applications as possible can run concurrently without problems. The architecture of Unix is relatively simple. Unix derives most of its functionality via the kernel, which is a block of code that supports all interaction with end-user applications, shells, etc.

Process Structure: Unix systems multitask applications in what is generally regarded as "user" mode, whereas the kernel code itself runs in "kernel" or "privileged" mode. The difference is important: kernel code is customized, via device drivers, for the hardware platform, so the kernel can take advantage, through device drivers, of specialized processor functionality that makes the multitasking of applications much smoother.

Input/Output and Piping: Unix console applications work in a similar fashion to Windows console applications: both support keyboard-oriented input and text-mode output. Unix supports all standard input/output mechanisms, including C functions such as printf() and gets(). Just as with a Windows console, these standard sources can be redirected.

Unix Architecture Diagram :- [diagram not reproduced in this copy]


5. Describe the following:

A) Unix Kernel :- The architecture of Unix is relatively simple. Unix derives most of its functionality via the kernel, which is a block of code that supports all interaction with end-user applications, shells, etc. Along with the kernel are the device drivers, which allow all applications, via the kernel, to interact with the devices that are available within the system. The kernel is designed to be flexible - Unix traditionally is distributed as a package of source code files written in the C programming language. The target system therefore needs an existing Unix implementation with a C compiler, which can recompile the latest kernel enhancements to create a new operating system image to execute after a system shutdown and reboot. Recently, however, due to the standardization of the PC platform (i.e., most PCs have a standard set of devices: floppy, hard disk(s), CD-ROM, high-resolution video adaptor, etc.), you can obtain precompiled Unix releases that embody the majority of features that an end-user or multi-user environment requires.

B) Unix Startup Scripts :- In the beginning, there was "init". If you had a Unix system, you had "init" and it was almost certainly process ID 1. Process 0 was the swapper (not visible in "ps"); it started init with a fork and, one way or another, init was responsible for starting everything else. There were variations, but not many. BSD systems simply used /etc/rc and /etc/ttys: you'd configure terminals in /etc/ttys and add anything else to /etc/rc. The System V Unixes used the more complex /etc/inittab file to direct init's actions, but really that wasn't much more than /etc/ttys; life was pretty simple. That didn't take long to change. The Automating Program Startup article describes init and inittab as they worked on SCO Unix and Linux at that time. The Linux inittab soon became mostly standardized, though with some confusion between /etc/init.d and /etc/rc.d/init.d and Red Hat's use of "chkconfig" (or the GUI "ntsysv" or "serviceconf") to control the startup scripts. With any Unix or Linux system that uses inittab, you simply need to start with /etc/inittab and follow the trail from there. For example, a recent CentOS system has these entries in /etc/inittab:

l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6

6. Explain the following with respect to Interprocess communication in Unix:

A) Communication via pipes :- A command line such as ls | pr | lpr pipes the output from the ls command, which lists the directory's files, into the standard input of the pr command, which paginates them. Finally, the standard output from the pr command is piped into the standard input of the lpr command, which prints the results on the default printer. Pipes, then, are unidirectional byte streams which connect the standard output from one process to the standard input of another process. Neither process is aware of this redirection and behaves just as it would normally. It is the shell which sets up these temporary pipes between the processes.


B) Named Pipes :- Named pipes allow two unrelated processes to communicate with each other. They are also known as FIFOs (first-in, first-out) and can be used to establish a one-way (half-duplex) flow of data. Named pipes are identified by their access point, which is basically a file kept on the file system. Because named pipes have the pathname of a file associated with them, it is possible for unrelated processes to communicate with each other; in other words, two unrelated processes can open the file associated with the named pipe and begin communication. Unlike anonymous pipes, which are process-persistent objects, named pipes are file-system-persistent objects; that is, they exist beyond the life of the process. They have to be explicitly deleted by one of the processes by calling "unlink", or else deleted from the file system via the command line. In order to communicate by means of a named pipe, the processes have to open the file associated with the named pipe: by opening the file for reading, a process gets access to the reading end of the pipe, and by opening the file for writing, it gets access to the writing end (a minimal sketch follows part C below).

C) Message Queues :- Message queues allow one or more processes to write messages which will be read by one or more reading processes. Linux maintains a list of message queues, the msgque vector, each element of which points to a msqid_ds data structure that fully describes the message queue. When message queues are created, a new msqid_ds data structure is allocated from system memory and inserted into the vector. The msqid_ds data structure contains an ipc_perm data structure and pointers to the messages entered onto this queue. In addition, Linux keeps queue modification times, such as the last time that this queue was written to, and so on. The msqid_ds also contains two wait queues: one for the writers to the queue and one for the readers of the message queue. Each time a process attempts to write a message to the write queue, its effective user and group identifiers are compared with the mode in this queue's ipc_perm data structure. If the process can write to the queue, then the message may be copied from the process's address space into a msg data structure and put at the end of this message queue. Each message is tagged with an application-specific type, agreed between the cooperating processes. However, there may be no room for the message, as Linux restricts the number and length of messages that can be written. In this case the process will be added to this message queue's write wait queue, and the scheduler will be called to select a new process to run. It will be woken up when one or more messages have been read from this message queue.
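A minimal FIFO sketch (Python on a Unix system; the pathname /tmp/demo_fifo is made up, and a parent/child pair is used only to keep the demo in one file - unrelated processes could do exactly the same because the FIFO is addressed by its pathname):

import os

fifo = "/tmp/demo_fifo"                # hypothetical pathname for the FIFO
if not os.path.exists(fifo):
    os.mkfifo(fifo, 0o600)             # create the named pipe in the file system

pid = os.fork()
if pid == 0:                           # writer: opening the file for writing gives the write end
    with open(fifo, "w") as w:
        w.write("hello over the FIFO\n")
    os._exit(0)
else:                                  # reader: opening the file for reading gives the read end
    with open(fifo) as r:
        print(r.read(), end="")
    os.waitpid(pid, 0)
    os.unlink(fifo)                    # FIFOs persist until explicitly removed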


D) Message Structure :- Shared memory is a special range of addresses that is created by IPC for one process and appears in the address space of that process. Other processes can then 'attach' the same shared memory segment into their own address space. A message store, by contrast, is based on a related set of files, including a message store file, a directory and index file, and user files. In this structure, multiple users may have a reference in their individual files to the same message; the product therefore offers a single-instance message store. Message references in user files relate to message offsets stored in an indexed structure, and message offsets refer to locations within the message store file, which is common to all users within a given database or "post office". MIME (Multipurpose Internet Mail Extensions) extends the format of e-mail messages to support:

• Text in character sets other than ASCII
• Non-text attachments
• Message bodies with multiple parts
• Header information in non-ASCII character sets

MIME's use, however, has grown beyond describing the content of e-mail to describing content type in general, including for the web (see Internet media type). Virtually all human-written Internet e-mail and a fairly large proportion of automated e-mail is transmitted via SMTP in MIME format. Internet e-mail is so closely associated with the SMTP and MIME standards that it is sometimes called SMTP/MIME e-mail.[1] The content types defined by MIME standards are also of importance outside of e-mail, such as in communication protocols like HTTP for the World Wide Web. HTTP requires that data be transmitted in the context of e-mail-like messages.
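Coming back to the shared-memory mechanism mentioned at the start of this part, a minimal sketch using Python's multiprocessing.shared_memory (available from Python 3.8); the creating and the attaching side are shown in one script purely for brevity:

from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=32)   # one process creates the segment
shm.buf[:5] = b"hello"

attached = shared_memory.SharedMemory(name=shm.name)     # another process attaches by name
print(bytes(attached.buf[:5]))                           # b'hello'

attached.close()
shm.close()
shm.unlink()                                             # remove the segment when it is no longer needed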


August 2010
Master of Computer Application (MCA) – Semester 2
MC0070 – Operating Systems with Unix
Assignment Set – 2

1. Describe the following with respect to Deadlocks in Operating Systems:

A) Livelocks :- Although numerous inconsistent definitions of livelock have been used in the literature, the term usually connotes one of the following:

• Starvation: Systems with a non-zero service cost and unbounded input rate may experience starvation. For example, if an operating system kernel spends all of its time servicing interrupts, then user processes will starve [11].
• Infinite Execution: The individual processes of an application may run successfully, but the application as a whole may be stuck in a loop [15]. For example, a naïve browser loads web page A that redirects to page B that erroneously redirects back to page A. Another example is a process stuck traversing a loop in a corrupted linked list.

B) Killing Zombies :- On Unix and Unix-like operating systems, a zombie process or defunct process is a process that has completed execution but still has an entry in the process table. This entry is still needed to allow the process that started the (now zombie) process to read its exit status. The term derives from the common definition of zombie, an undead person: in the term's colourful metaphor, the child process has died but has not yet been reaped. Also, unlike normal processes, the kill command has no effect on a zombie process. When a process ends, all of the memory and resources associated with it are deallocated so they can be used by other processes; however, the process's entry in the process table remains. The parent can read the child's exit status by executing the wait system call, at which stage the zombie is removed. The wait call may be executed in sequential code, but it is commonly executed in a handler for the SIGCHLD signal, which the parent receives whenever a child has died.
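A minimal sketch of creating and then reaping a zombie (Python on a Unix system; ps is assumed to be available):

import os, time

pid = os.fork()
if pid == 0:
    os._exit(0)                                   # child terminates immediately ...
else:
    time.sleep(1)                                 # ... and is now a zombie: exited but not yet reaped
    os.system("ps -o pid,stat,comm -p %d" % pid)  # its STAT column typically shows 'Z'
    _, status = os.waitpid(pid, 0)                # wait reads the exit status; the zombie disappears
    print("child exit status:", os.WEXITSTATUS(status))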

C) Pipes :-

• Conceptually, a pipe is a connection between two processes, such that the standard output from one process becomes the standard input of the other process.
• It is possible to have a series of processes arranged in a pipeline, with a pipe between each pair of processes in the series.
• Implementation: a pipe can be implemented as a 10k buffer in main memory with two pointers, one for the FROM process and one for the TO process.
• One process cannot read from the buffer until another has written to it.
• The UNIX command-line interpreter (e.g., csh) provides a pipe facility:
  o % prog1 | more
  o This command runs the prog1 program and sends its output to the more program.

Pipe System Call

• pipe() is a system call that facilitates inter-process communication. It opens a pipe, which is an area of main memory that is treated as a "virtual file". The pipe can be used by the creating process, as well as all its child processes, for reading and writing. One process can write to this "virtual file" or pipe and another related process can read from it.
• If a process tries to read before something is written to the pipe, the process is suspended until something is written.
• The pipe system call finds the first two available positions in the process's open file table and allocates them for the read and write ends of the pipe.
• Recall that the open system call allocates only one position in the open file table.
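A minimal sketch of the pipe system call from Python (Unix only), using os.pipe and os.fork; the parent reads what the child writes:

import os

r, w = os.pipe()                       # two file descriptors: read end and write end
pid = os.fork()
if pid == 0:                           # child writes into the pipe
    os.close(r)
    os.write(w, b"hello through the pipe\n")
    os.close(w)
    os._exit(0)
else:                                  # parent reads from the pipe
    os.close(w)
    data = os.read(r, 1024)            # blocks until the child has written something
    os.close(r)
    os.waitpid(pid, 0)
    print(data.decode(), end="")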

2. Explain the following:

A) Requirements for mutual exclusion :- Mutual exclusion is a way of making sure that if one process is using shared modifiable data, the other processes will be excluded from doing the same thing. Formally, while one process is accessing the shared variable, all other processes desiring to do so at the same moment should be kept waiting; when that process has finished accessing the shared variable, one of the processes waiting to do so should be allowed to proceed. In this fashion, each process accessing the shared data (variables) excludes all others from doing so simultaneously. This is called mutual exclusion. Note that mutual exclusion needs to be enforced only when processes access shared modifiable data; when processes are performing operations that do not conflict with one another, they should be allowed to proceed concurrently.

B) Mutual exclusion by using lock variables :- Locks are mechanisms that ensure that only one person or process is doing certain things at one time (others are excluded). Have a single shared lock variable (lock), initially set to 0. When a process wants to enter a critical section (CS), it checks whether lock is 0. If so, it sets it to 1 and enters the CS. After it is done, it resets it to 0. Note that this is not a solution but the same problem: we need mutual exclusion for accesses to the lock variable itself. Three elements of locking (see the sketch after this list):

• Lock before using
• Unlock when done
• Wait (or skip) if locked
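For contrast with the flawed bare lock variable, a minimal sketch of the three elements using a real lock primitive (Python's threading.Lock; the counter and thread counts are arbitrary):

import threading

counter = 0
lock = threading.Lock()                    # a proper lock primitive instead of a bare shared variable

def worker():
    global counter
    for _ in range(100_000):
        with lock:                         # lock before using; wait if locked
            counter += 1                   # critical section
                                           # unlock when done (released automatically by "with")

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                             # 400000; without the lock, updates could be lost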


C) Semaphore Implementation :- A semaphore is a synchronization variable that takes on non-negative integer values, invented by E. Dijkstra in the mid-1960s.

• P(semaphore): an atomic operation that waits for the semaphore to become positive, then decrements it by 1 ("proberen" in Dutch).
• V(semaphore): an atomic operation that increments the semaphore by 1 ("verhogen" in Dutch).
• Semaphores are simple and elegant and allow the solution of many interesting problems. They do a lot more than just mutual exclusion.

Semaphores are used in two different ways:

• Mutual exclusion: to ensure that only one process is accessing shared information at a time. If there are separate groups of data that can be accessed independently, there may be separate semaphores, one for each group of data. These semaphores are always binary semaphores.
• Scheduling: to permit processes to wait for certain things to happen. If there are different groups of processes waiting for different things, a separate semaphore may be used for each.
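A minimal sketch of P and V on a binary semaphore used for mutual exclusion (Python's threading.Semaphore; the helper names P and V simply wrap acquire and release):

import threading, time

sem = threading.Semaphore(1)           # binary semaphore used here for mutual exclusion

def P(s):
    s.acquire()                        # wait for the semaphore to become positive, then decrement
def V(s):
    s.release()                        # increment the semaphore by one

shared = []

def writer(item):
    P(sem)                             # enter the critical section
    shared.append(item)
    time.sleep(0.01)                   # pretend to work while holding the semaphore
    V(sem)                             # leave the critical section

threads = [threading.Thread(target=writer, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)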

3. Describe the concept of space management in file systems.

Ans:- In computer operating systems, paging is one of the memory-management schemes by which a computer can store and retrieve data from secondary storage for use in main memory. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. The main advantage of paging is that it allows the physical address space of a process to be non-contiguous. Before paging, systems had to fit whole programs into storage contiguously, which caused various storage and fragmentation problems.[1] Paging is an important part of virtual memory implementation in most contemporary general-purpose operating systems, allowing them to use disk storage for data that does not fit into physical random-access memory (RAM). The main functions of paging are performed when a program tries to access pages that are not currently mapped to physical memory (RAM). This situation is known as a page fault. The operating system must then take control and handle the page fault, in a manner invisible to the program. Therefore, the operating system must:

1. Determine the location of the data in auxiliary storage.
2. Obtain an empty page frame in RAM to use as a container for the data.
3. Load the requested data into the available page frame.
4. Update the page table to show the new data.
5. Return control to the program, transparently retrying the instruction that caused the page fault.
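A toy simulation of those five steps (illustration only: one global page table, no replacement policy, and a dictionary standing in for the backing store):

# Toy page-fault handler: a page table maps page numbers to frames, and missing
# pages are brought in from a pretend backing store.
backing_store = {0: "page 0 data", 1: "page 1 data", 2: "page 2 data"}   # "disk"
frames = {}            # physical memory: frame number -> contents
page_table = {}        # page number -> frame number
next_free_frame = 0

def access(page):
    global next_free_frame
    if page not in page_table:                 # page fault
        frame = next_free_frame                # steps 1-2: locate the data, obtain a free frame
        next_free_frame += 1
        frames[frame] = backing_store[page]    # step 3: load the page into the frame
        page_table[page] = frame               # step 4: update the page table
    return frames[page_table[page]]            # step 5: retry the access transparently

print(access(1))    # faults, loads the page, then returns its contents
print(access(1))    # second access hits in "memory" without a fault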

4. Explain the theory of working with files and directories in the Unix file system.

Ans:- The Unix File System: The purpose of this lesson is to introduce you to how files and directories are handled in Unix.


Unix keeps track of files and directories of files using a file system. When you log in to your Unix account, you are placed in your "home" directory. Your home directory thus becomes your "present working directory" when you log in. In your home directory, you can create files and subdirectories, and in the subdirectories you create, you can create more subdirectories. The commands that you issue at the Unix prompt relate to the files, folders and resources available from your present working directory. You can certainly use and refer to resources outside of your current working directory; to understand how this works, you need to know how the Unix file system is structured.

The filesystem tree : You can visualize the Unix file system as an upside-down tree. At the very top of the tree is the root directory, named "/". This special directory is maintained by the Unix system administrator. Under the root directory, subdirectories organize the files and subdirectories on the system. The names of these subdirectories might be any name at all. Here is a tree diagram of a typical Unix system. [tree diagram not reproduced in this copy]
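Since the diagram itself is not reproduced, here is a small sketch that prints such a tree for any starting directory (it may produce a long listing for a large home directory; unreadable directories are simply skipped):

import os

def show_tree(root, indent=""):
    # Print one directory level per call, indenting each level of the upside-down tree.
    try:
        entries = sorted(os.listdir(root))
    except PermissionError:
        return                                   # skip directories we may not read
    for name in entries:
        path = os.path.join(root, name)
        print(indent + name + ("/" if os.path.isdir(path) else ""))
        if os.path.isdir(path):
            show_tree(path, indent + "    ")

show_tree(os.path.expanduser("~"))               # start from the home directory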

5. Explain the working of file substitution in Unix. Also describe the usage of pipes in Unix Operating system.

Ans:- We can use the output of one command as an input to another command in another way, called command substitution. Command substitution is invoked by enclosing the substituted command in backward single quotes (backquotes). For example:

cat `find . -name aaa.txt`

which will cat (dump to the screen) all the files named aaa.txt that exist in the current directory or in any subdirectory tree.
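The same effect can be sketched with Python's subprocess module (assuming a Unix system with find and cat available):

import subprocess

# The backquote form above, expressed in Python: capture the output of find,
# then pass the resulting file names to cat.
names = subprocess.check_output(["find", ".", "-name", "aaa.txt"], text=True).split()
if names:
    print(subprocess.check_output(["cat", *names], text=True))
else:
    print("no aaa.txt files found")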

Usage of Pipes :-

• Conceptually, a pipe is a connection between two processes, such that the standard output from one process becomes the standard input of the other process.
• It is possible to have a series of processes arranged in a pipeline, with a pipe between each pair of processes in the series.
• Implementation: a pipe can be implemented as a 10k buffer in main memory with two pointers, one for the FROM process and one for the TO process.
• One process cannot read from the buffer until another has written to it.
• The UNIX command-line interpreter (e.g., csh) provides a pipe facility:
  o % prog | more

6. Describe the following with respect to Unix: A) Making Calculations with dc and bc :- UNIX has two calculator programs that you can use from the command line: dc and bc. The dc (desk calculator) program uses Reverse Polish Notation (RPN), familiar to everyone who has used Hewlett-Packard pocket calculators, 13

and the bc (basic calculator) program uses the more familiar algebraic notation. Both programs perform essentially the same calculations. Calculating with bc The basic calculator, bc, can do calculations to any precision that you specify. Therefore, if you know how to calculate pi and want to know its value to 20, 50, or 200 places, for example, use bc. This tool can add, subtract, multiply, divide, and raise a number to a power. It can take square roots, compute sines and cosines of angles, calculate exponentials and logarithms, and handle arctangents and Bessel functions. In addition, it contains a programming language whose syntax looks much like that of the C programming language. Calculating with dc As mentioned earlier, the desk calculator, dc, uses RPN, so unless you’re comfortable with that notation, you should stick with bc. Also, dc does not provide a built-in programming language, built-in math functions, or the capability to define functions. It can, however, take its input from a file. If you are familiar with stack-oriented calculators, you’ll find that dc is an excellent tool. It can do all the calculations that bc can and it also lets you manipulate the stack directly. B) Various Commands for getting user information:whoami --- returns your username. Sounds useless, but isn't. You may need to find out who it is who forgot to log out somewhere, and make sure *you* have logged out. finger & .plan files of course you can finger yourself, too. That can be useful e.g. as a quick check whether you got new mail. Try to create a useful .plan file soon. Look at other people's .plan files for ideas. The file needs to be readable for everyone in order to be visible through 'finger'. Do 'chmod a+r .plan' if necessary. You should realize that this information is accessible from anywhere in the world, not just to other people on turing. passwd --- lets you change your password, which you should do regularly (at least once a year). See the LRB guide and/or look at help password. ps -u yourusername --- lists your processes. Contains lots of information about them, including the process ID, which you need if you have to kill a process. Normally, when you have been kicked out of a dialin session or have otherwise managed to get yourself disconnected abruptly, this list will contain the processes you need to kill. Those may include the shell (tcsh or whatever you're using), and anything you were running, for example emacs or elm. Be careful not to kill your current shell - the one with the number closer to the one of the ps command you're currently running. But if it happens, don't panic. Just try again :) If you're using an X-display you may have to kill some X 14

processes before you can start them again. These will show only when you use ps -efl, because they're root processes. kill PID --- kills (ends) the processes with the ID you gave. This works only for your own processes, of course. Get the ID by using ps. If the process doesn't 'die' properly, use the option -9. But attempt without that option first, because it doesn't give the process a chance to finish possibly important business before dying. You may need to kill processes for example if your modem connection was interrupted and you didn't get logged out properly, which sometimes happens. quota -v --- show what your disk quota is (i.e. how much space you have to store files), how much you're actually using, and in case you've exceeded your quota (which you'll be given an automatic warning about by the system) how much time you have left to sort them out (by deleting or gzipping some, or moving them to your own computer). du filename --- shows the disk usage of the files and directories in filename (without argument the current directory is used). du -s gives only a total. last yourusername --- lists your last logins. Can be a useful memory aid for when you were where, how long you've been working for, and keeping track of your phonebill if you're making a non-local phonecall for dialling in. C) Switching Accounts with su:- The su command, also referred to as substitute user as early as 1979, has also been called "spoof user" or "set user" because it allows changing the account associated with the current terminal (window). It has also been incorrectly called "superuser" but what it does and why it does it has not been well understood, since the 1980s. The default account that is spoofed is the root account. In practice, the most likely reason to use it is enabling the administrator to switch from working in their non-root account, to the root account. Another reason for an admin to use it pertains to the hyphen parameter (argument). By default it does not open a login shell. Instead of using the spoofed user's environment it retains the environment of the user who invokes it. This behavior can be changed by including a hyphen as the first argument, which indicates that a login shell should be started. The most obvious differences between these two uses (login versus non-login) are changes to the default working directory and changing the path. When run from a command line, as is typical, su asks for the target user's password, and, if accepted, grants the user access to that account and all of the files associated with it. john@localhost:~$ su Password: root@localhost:/home/john# exit logout john@localhost:~$

