$ sleep 100 &
[1] 9298
You can use pidof to view the process ID for a specified program name:

$ pidof sleep
9298
$ cat /proc/9298/maps
08048000-0804b000 r-xp 00000000 08:01 977399     /bin/sleep
0804b000-0804c000 rw-p 00003000 08:01 977399     /bin/sleep
0804c000-0806d000 rw-p 0804c000 00:00 0          [heap]
b7c8b000-b7cca000 r--p 00000000 08:01 443354
...
bfbd8000-bfbed000 rw-p bfbd8000 00:00 0          [stack]
ffffe000-fffff000 r-xp 00000000 00:00 0          [vdso]
Once the program is executed, it is loaded into memory and becomes a process. The listing above is the memory image (virtual memory) of the process, including the program's instructions, its data, some stack space used to store the program's command-line arguments and environment variables, and the heap space already allocated for dynamic memory requests.
For details about how a program gets executed on the command line, please refer to "The Moment of Program Execution under the Linux Command Line".
In fact there are other ways to create a process, that is, to get a program running. For example, through certain configuration you can have a program start automatically when the system boots (for details, see man init), or configure crond (or at) to start it on a schedule. There is yet another way: write the program into a Shell script file; when the script is executed, the programs in the file run and become processes. The details of these methods are not covered here. Next, let's learn how to view the properties of a process.
One thing worth adding: when executing a program on the command line, you can use the ulimit built-in command to set the resources a process may use, such as the maximum number of file descriptors a process can open, the maximum stack space, the virtual memory size, and so on. For specific usage, see help ulimit.
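As a quick illustration (a minimal sketch; the available limits and their current values vary by system):

```shell
ulimit -a          # list all current limits for this shell
ulimit -n          # show the maximum number of open file descriptors
ulimit -S -s 2048  # lower the soft stack limit to 2048 KB for this shell and its children
```

Limits set this way are inherited by every process the shell starts afterwards.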
You can use the ps command to view process-related attributes and status, including which user a process belongs to, which program it corresponds to, its CPU and memory usage, and other information. Familiarity with how to view these helps with statistical analysis and related operations.
View the properties of all current processes in the system:
$ ps -ef
View the process corresponding to a given program name; here the init process, whose process ID is 1 and whose TTY is ?, indicating that it is not attached to any terminal:

$ ps -C init
  PID TTY          TIME CMD
    1 ?        00:00:01 init
Select processes started by a specific user:
$ ps -U falcon
Output specified content in a specified format. The following outputs the command name and CPU usage:
$ ps -e -o %C %c
The four programs using the most CPU:

$ ps -e -o %C %c | sort -u -k1 -r | head -5
 7.5 firefox-bin
 1.1 Xorg
 0.8 scim-panel-gtk
 0.2 scim-bridge
Get the five processes using the largest virtual memory:
$ ps -e -o %z %c | sort -n -k1 -r | head -5
349588 firefox-bin
 96612 xfce4-terminal
 88840 xfdesktop
 76332 gedit
 58920 scim-panel-gtk
All processes in the system are related by a "kinship"; you can view this relationship with pstree:
$ pstree
This prints the system's process tree, from which you can clearly see the calling relationships among all currently active processes.
$ top
The biggest feature of this command is that it shows process information dynamically. It also provides other parameters; for example, -S lets you sort by cumulative execution time, and -u shows the processes started by a specified user.
A supplement: the top command supports interactive use; for example, the u command displays the processes of a specified user, and the k command kills a process. If you use the -b option you can enable batch mode, typically combined with -n 1 for a single iteration; the specific usage is:
$ top -n 1 -b
Let's discuss an interesting problem: how to make sure only one instance of a program runs at a time.

This means that while the program is executing, it cannot be started a second time. So what can be done?

If the same program were copied into many files with different names placed in different locations, things would be messier, so consider the simplest case: the program file is unique on the whole system, and its name is unique too. Under this condition, what are some ways to answer the question above?
The general mechanism is: at startup, the program checks whether it is already running; if it is, it stops, otherwise it continues executing the subsequent code.
The strategies are diverse. Since the assumption above guarantees that the program's file name is unique, one way is to use the ps command to list the program names of all current processes and compare them one by one with the program's own name; if the name already appears, the program is already running.

ps -e -o %c | tr -d ' ' | grep -q ^init$   # Check whether the program (init here) is already running
[ $? -eq 0 ] && exit   # If so, exit; $? holds the exit status of the previous command
Another strategy: each time it runs, the program first checks a file at a fixed location that stores its process ID. If the file does not exist, it continues; if the file exists, it checks whether the process with that ID is still running. If so, it exits; otherwise it writes the new process ID to the file and continues.
pidfile=/tmp/$0.pid
if [ -f $pidfile ]; then
    OLDPID=$(cat $pidfile)
    ps -e -o %p | tr -d ' ' | grep -q ^$OLDPID$
    [ $? -eq 0 ] && exit
fi
echo $$ > $pidfile

# ... Code body

# Set the action for signal 0: when the program exits, the trap fires
# and the temporary file is deleted
trap "rm $pidfile" 0
Feel free to use more implementation strategies yourself!
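One more common strategy relies on an exclusive file lock; a minimal sketch, assuming the flock(1) utility from util-linux is available (the lock-file path is an arbitrary choice):

```shell
#!/bin/bash
# Single-instance guard using an exclusive, non-blocking file lock.
LOCKFILE=/tmp/myapp.lock      # hypothetical lock-file path

exec 9>"$LOCKFILE"            # open file descriptor 9 on the lock file
if ! flock -n 9; then         # try to take the lock without blocking
    echo "already running" >&2
    exit 1
fi

# ... code body: the kernel releases the lock automatically when the process exits
```

Unlike the pidfile approach, the lock is released even if the process is killed, so no stale state is left behind.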
Besides ensuring each process runs smoothly, the system schedules processes with certain policies so that some tasks complete first, such as the common time-slice round-robin-by-priority scheduling algorithm. In this case, you can use renice to adjust the priority of a running program, for example:
$ ps -e -o %p %c %n | grep xfs
 5089 xfs              0
$ renice 1 -p 5089
renice: 5089: setpriority: Operation not permitted
$ sudo renice 1 -p 5089   # Root privileges are required
[sudo] password for falcon:
5089: old priority 0, new priority 1
$ ps -e -o %p %c %n | grep xfs   # Look again: the priority has been adjusted
 5089 xfs              1
Since you can create a process by executing a program on the command line, there is naturally a way to end it as well. You can use the kill command to send a signal to a process started by the user and terminate it. Of course, the "almighty" root can kill almost any process (except init). For example,
$ sleep 50 &   # Start a process
[1] 11347
$ kill 11347
By default, the kill command sends the termination signal SIGTERM to the program, letting it exit, but kill can also send other signals. These signals are defined in man 7 signal and can also be listed with kill -l.
$ man 7 signal
$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL
 5) SIGTRAP      6) SIGABRT      7) SIGBUS       8) SIGFPE
 9) SIGKILL     10) SIGUSR1     11) SIGSEGV     12) SIGUSR2
13) SIGPIPE     14) SIGALRM     15) SIGTERM     16) SIGSTKFLT
17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU
25) SIGXFSZ     26) SIGVTALRM   27) SIGPROF     28) SIGWINCH
29) SIGIO       30) SIGPWR      31) SIGSYS      34) SIGRTMIN
35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3  38) SIGRTMIN+4
39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12
47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10
55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7  58) SIGRTMAX-6
59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX
For example, use the kill command to send the SIGSTOP signal to pause a program, and then send SIGCONT to let it keep running.
$ sleep 50 &
[1] 11441
$ jobs
[1]+  Running                 sleep 50 &
$ kill -s SIGSTOP 11441   # Equivalent to pressing CTRL+Z on a foreground process
$ jobs
[1]+  Stopped                 sleep 50
$ kill -s SIGCONT 11441   # Equivalent to the bg %1 operation used earlier to resume a background job
$ jobs
[1]+  Running                 sleep 50 &
$ kill %1   # Within the current session you can also control the process by job number
$ jobs
[1]+  Terminated              sleep 50
As you can see, the kill command provides very good functionality, but it can only control processes by process ID or job number, whereas pkill and killall provide more options, extending control of a process to its program name or even its user name. Please refer to their manuals for more usage.
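As a minimal sketch of killing by name (assuming pkill from the procps suite; -x matches the exact program name, and -u would further restrict matching to a given user's processes):

```shell
sleep 60 &        # start a throwaway background process
pkill -x sleep    # send SIGTERM to processes whose name is exactly "sleep"
```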
When a program exits, how do you determine whether it exited normally or abnormally? Remember the classic hello world program under Linux? At the end of its code there is always a return 0 statement. This return 0 actually lets the programmer check whether the process exited normally: if the process returns some other value, it can safely be said to have exited abnormally, because it never reached the return 0 statement.
So how do you check the exit status of a process, that is, its return value?

In the Shell, you can check the special variable $?, which stores the exit status of the previously executed command.
$ test1
bash: test1: command not found
$ echo $?
127
$ cat ./test.c | grep hello
$ echo $?
1
$ cat ./test.c | grep hi
    printf("hi, myself!\n");
$ echo $?
0
Returning 0 seems to have become an unwritten rule. Although no standard explicitly mandates it, when a program returns normally, a 0 can always be detected from $?, and when something abnormal happens, a non-zero value is detected. This tells us that a program should preferably end with exit 0, so that anyone can use $? to determine whether it finished normally. If someone happens to use your program one day and checks its exit status, yet you inexplicably return -1 or 1 on success, they will be distressed and wonder where the problem lies in their own code, checking for a long time to no avail, because they trust you and never once doubt your programming habits.
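The convention is easy to see in a conditional, which branches directly on the exit status (a minimal sketch):

```shell
# grep -q exits 0 on a match and 1 otherwise; `if` tests that status directly
if echo "hello world" | grep -q hello; then
    echo "found"
else
    echo "not found"
fi
# → found
```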
To facilitate design and implementation, a large task is usually divided into smaller modules, and the different modules become processes once started. How do they communicate with each other to exchange data and cooperate? The book "Advanced Programming in the UNIX Environment" covers many methods, such as pipes (unnamed pipes and named pipes), signals (signal), message queues (Message queue), shared memory (mmap/munmap), semaphores (semaphore, mainly used for synchronization between processes or between threads of a process), sockets (Socket, which supports communication between processes on different machines), and so on. In Shell, pipes and signals are usually used directly, so the following mainly introduces some uses of pipes and the signal mechanism in Shell programming.
Under Linux, you can connect two programs with |, attaching the input of the latter program to the output of the former, which is why it is vividly called a pipe. In C, creating an unnamed pipe is simple and convenient: call the pipe function and pass in a two-element int array, which actually holds two file descriptors. After the parent process writes something to the second file descriptor (the write end), the child process can read it from the first (the read end).
If you use the command line a lot, you must use the pipe | frequently. For example, earlier there was a demonstration feeding the output of the ps command to the grep command as input:
$ ps -ef | grep init
You may think this "pipe" is magical in actually linking the input and output of two programs. How is it implemented? In fact, when such a set of commands is entered, the current Shell parses it appropriately, associates the output of the preceding process with the pipe's write file descriptor, and associates the input of the following process with the pipe's read file descriptor. This association is carried out through the input/output redirection functions dup (or fcntl).
A named pipe is actually a file (an unnamed pipe is also file-like: although it is associated with two file descriptors, it can only be read on one end and written on the other). However, this file is quite special: operations on it must satisfy first-in-first-out, and if you try to read from a named pipe that has no content, you will be blocked; likewise, if you try to write to a named pipe and no program is currently trying to read it, you will also be blocked. See the effect below.
$ mkfifo fifo_test   # Create a named pipe with the mkfifo command
$ echo fewfefe > fifo_test   # Try to write content to fifo_test; this blocks, so open another terminal for the next step
$ cat fifo_test   # In another terminal (remember, another one), try to read the content of fifo_test
fewfefe
Here echo and cat are two different programs. In this case, the two processes started by echo and cat have no parent-child relationship, yet they can still communicate through the named pipe.
This communication method suits some scenarios very well. For example, consider an architecture consisting of two applications: one continuously reads the content of fifo_test in a loop to decide what it should do next, and if the pipe has no content it blocks there instead of consuming resources in a busy loop; the other, acting as a control program, keeps writing control information into fifo_test to tell the former what to do. Below is a very simple example: you can design some control codes, have the control program keep writing them into fifo_test, and have the application perform different actions according to these control codes. Of course, you can also pass data other than control codes into fifo_test.
Application code:
$ cat app.sh
#!/bin/bash

FIFO=fifo_test
while :; do
    CI=`cat $FIFO`   # CI --> Control Info
    case $CI in
        0) echo "The CONTROL number is ZERO, do something ..."
           ;;
        1) echo "The CONTROL number is ONE, do something ..."
           ;;
        *) echo "The CONTROL number not recognized, do something else..."
           ;;
    esac
done
Control program code:
$ cat control.sh
#!/bin/bash

FIFO=fifo_test
CI=$1

[ -z "$CI" ] && echo "the control info should not be empty" && exit

echo $CI > $FIFO
One program controls the work of another program through the pipe:
$ chmod +x app.sh control.sh   # Make both programs executable
$ ./app.sh   # Start the application in one terminal and watch the output after sending control codes with ./control.sh
The CONTROL number is ONE, do something ...               # after sending 1
The CONTROL number is ZERO, do something ...              # after sending 0
The CONTROL number not recognized, do something else...   # after sending an unknown control code
$ ./control.sh 1   # In another terminal, send control information to drive the application
$ ./control.sh 0
$ ./control.sh 4343
Such an application architecture is very suitable for local multi-program task design, and combined with web CGI it also suits remote-control requirements. To introduce web CGI, the only change is to put the control program ./control.sh into the web server's cgi-bin directory and modify it to conform to the CGI specification, which covers the document output format (the program needs to output content-type: text/html plus a blank line at the beginning) and how input parameters are obtained (web input parameters are stored in the QUERY_STRING environment variable). So a very simple CGI control program can be written like this:
#!/bin/bash

FIFO=./fifo_test
CI=$QUERY_STRING

[ -z "$CI" ] && echo "the control info should not be empty" && exit

echo -e "content-type: text/html\n\n"
echo $CI > $FIFO
In actual use, please make sure control.sh can access the fifo_test pipe and has write permission to it, so that app.sh can be controlled through a browser:
http://ipaddress_or_dns/cgi-bin/control.sh?0
The content after the question mark ? is the QUERY_STRING, playing the role of $1 earlier.
Such an application is of great practical significance for remote control, especially remote control of embedded systems. In last year's summer course we used this approach to control a motor remotely: first a simple application was implemented to control the motor's rotation, including its speed and direction, and then, to enable remote control, we designed control codes for the different rotation-related properties.
In C, using a named pipe is similar to the Shell, except that you transfer data with the read and write calls and create the FIFO with the mkfifo function call.
Signals are software interrupts. A Linux user can use the kill command to send a specific signal to a process, and some signals can be sent from the keyboard; for example, CTRL+C triggers the SIGINT signal and CTRL+\ triggers SIGQUIT. In addition, the kernel sends signals to processes in certain situations, such as SIGSEGV when a process accesses memory out of bounds. A process can also send signals to itself through functions such as kill and raise. For the signal types supported under Linux, see the lists and descriptions in man 7 signal or via kill -l.
For some signals a process takes a default action, while others it may simply ignore; users can also set special handler functions for particular signals. In the Shell, the trap command (a Shell built-in) sets the action (a command or a defined function) taken in response to a signal, while in C you use the signal call to register a handler function for a signal. Here we only demonstrate the usage of trap.
$ function signal_handler { echo "hello, world."; }   # Define the signal_handler function
$ trap signal_handler SIGINT   # Set it so that "hello, world." is printed when SIGINT is received
$ ^C
hello, world.   # Press CTRL+C and the string is printed on the screen
Similarly, by setting the action for signal 0, you can use trap to simulate C's atexit registration of a program-termination function; that is, trap signal_handler 0 sets the signal_handler function to be executed when the program exits. Signal 0 is special: POSIX.1 defines signal number 0 as the null signal, often used to determine whether a particular process still exists, and in the Shell the action trapped on 0 fires when the program exits.
$ cat sigexit.sh
#!/bin/bash

function signal_handler {
    echo "hello, world"
}
trap signal_handler 0
$ chmod +x sigexit.sh
$ ./sigexit.sh   # Real Shell programs use this method to clean up temporary files when exiting
hello, world
When we combine multiple commands with |, >, <, ;, (, ), the command sequence usually starts several processes that communicate through pipes and the like. Sometimes, while one task is executing, other tasks need attention, so we often append an & to a command sequence, or press CTRL+Z after starting it to suspend it and do other things; after finishing those, the fg command brings the background task back to the foreground. Such a control flow is usually called job control, and the command sequences are called jobs; a job may involve one or more programs and one or more processes. The following demonstrates several common job-control operations.
$ sleep 50 &
[1] 11137
Use the Shell built-in command fg to bring job 1 to the foreground, then press CTRL+Z to suspend it:
$ fg %1
sleep 50
^Z
[1]+  Stopped                 sleep 50
$ jobs   # Check current job status: one job is stopped
[1]+  Stopped                 sleep 50
$ sleep 100 &   # Run another job in the background
[2] 11138
$ jobs   # Check again: one running, one stopped
[1]+  Stopped                 sleep 50
[2]-  Running                 sleep 100 &
$ bg %1   # Resume the stopped job 1 in the background
[1]+ sleep 50 &
However, using job control on the command line requires support from the current Shell, the kernel's terminal driver, and so on.
Reference: "Advanced Programming in the UNIX Environment"