```
$ i=0;
$ ((i++))
$ echo $i
1
$ let i++
$ echo $i
2
$ expr $i + 1
3
$ echo $i
2
$ echo $i 1 | awk '{printf $1+$2}'
3
```
Notes:

- After `expr`, the `$i`, `+`, and `1` must be separated by spaces. For multiplication, the `*` operator needs to be escaped, otherwise the Shell interprets it as a filename wildcard and reports a syntax error.
- In the `awk` command, `$1` and `$2` refer to `$i` and `1` respectively, i.e. the 1st and 2nd fields from left to right.
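For instance, a minimal sketch of the multiplication case (the value of `i` is just an assumption for illustration):

```sh
i=4
# escape * so the Shell does not expand it as a filename wildcard
expr $i \* 2
```

This prints `8`; without the backslash, the Shell would replace `*` with the file names in the current directory before `expr` ever sees it.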
Use the Shell built-in command `type` to view the type of each command:
```
$ type type
type is a shell builtin
$ type let
let is a shell builtin
$ type expr
expr is hashed (/usr/bin/expr)
$ type bc
bc is hashed (/usr/bin/bc)
$ type awk
awk is /usr/bin/awk
```
From the demonstration above we can see that `let` is a Shell built-in command, while the others are external commands, all located in the `/usr/bin` directory. Because `expr` and `bc` have just been used, they have already been loaded into the in-memory `hash` table. This helps us understand the principles behind the various script execution methods introduced in the previous chapter.
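As a sketch, the `hash` built-in lets you inspect that table yourself (the exact output format varies between shells):

```sh
#!/bin/bash
# run an external command once so the shell records it in the hash table
expr 1 + 1 > /dev/null
# the hash built-in lists hit counts and full paths of hashed commands
hash
```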
Note: to view help for different commands: for Shell built-in commands such as `let` and `type`, use the Shell built-in command `help`; for external commands, use the external command `man`. For example: `help let`, `man expr`, etc.
```
#!/bin/bash
# calc.sh

i=0
while [ $i -lt 10000 ]
do
    ((i++))
done
echo $i
```
Notes: the loop here is implemented with `while [ conditional expression ]; do ...; done`. `-lt` is the less-than sign `<`; for details see the usage of the `test` command: `man test`.
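A minimal sketch of `test`'s numeric operators (`[` is the same command as `test`; the values here are just for illustration):

```sh
i=5
# -lt is <, -ge is >=; test and [ behave identically
if test $i -lt 10; then echo "i is less than 10"; fi
if [ $i -ge 5 ]; then echo "i is at least 5"; fi
```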
How to execute this script?
Method 1: pass the script file directly as an argument to a sub-Shell (Bash)
```
$ bash calc.sh
$ type bash
bash is hashed (/bin/bash)
```
Method 2: execute it with the `bash` built-in commands `.` or `source`
```
$ . ./calc.sh
```
or
```
$ source ./calc.sh
$ type .
. is a shell builtin
$ type source
source is a shell builtin
```
Method 3: make the file executable and execute it directly under the current Shell

```
$ chmod +x ./calc.sh
$ ./calc.sh
```
Next, we demonstrate one by one how to increment the variable with the other methods, i.e. replacing the `((i++))` line with one of the following:

```
let i++
i=$(expr $i + 1)
i=$(echo $i+1 | bc)
i=$(echo $i 1 | awk '{printf $1+$2;}')
```
The comparison of the computation times is as follows:

```
$ time calc.sh
10000

real    0m1.319s
user    0m1.056s
sys     0m0.036s

$ time calc_let.sh
10000

real    0m1.426s
user    0m1.176s
sys     0m0.032s

$ time calc_expr.sh
1000

real    0m27.425s
user    0m5.060s
sys     0m14.177s

$ time calc_bc.sh
1000

real    0m56.576s
user    0m9.353s
sys     0m24.618s

$ time ./calc_awk.sh
100

real    0m11.672s
user    0m2.604s
sys     0m2.660s
```
Note: the `time` command measures a command's execution time, including the total elapsed time, user-space time, and kernel-space time; it is implemented via the `ptrace` system call.
From the comparison above, `(( ))` is the most efficient, and `let`, as a Shell built-in command, is also very efficient, while `expr`, `bc`, and `awk` are comparatively slow. So, if the Shell itself can do the job, prefer the Shell's own facilities. But some things, such as floating-point arithmetic, the Shell itself cannot do, and the help of external commands is needed. In addition, for the portability of Shell scripts, avoid Shell-specific syntax when performance is not critical.
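For instance, a portable increment can be written with POSIX arithmetic expansion, a sketch of the portability point above:

```sh
# POSIX arithmetic expansion: portable, no Bash-only syntax
i=0
i=$((i+1))
echo $i
```

This prints `1` and behaves the same under `sh`, `dash`, and `bash`.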
`let`, `expr`, and `bc` can all compute a modulus, and the operator is `%` for all of them; `let` and `bc` can compute powers, but with different operators: the former uses `**`, the latter `^`. For example:
```
$ expr 5 % 2
1
$ let i=5%2
$ echo $i
1
$ echo 5 % 2 | bc
1
$ ((i=5%2))
$ echo $i
1
```
```
$ let i=5**2
$ echo $i
25
$ ((i=5**2))
$ echo $i
25
$ echo 5^2 | bc
25
```
Base conversion is also a fairly common operation. It can be done with Bash's built-in support or with `bc`. For example, to convert 11 in octal to decimal:
```
$ echo "obase=10;ibase=8;11" | bc -l
9
$ echo $((8#11))
9
```
Both of the above convert a number in some base into base 10. For conversion between arbitrary bases, `bc` is more flexible, because `ibase` and `obase` directly specify the source base and the target base respectively.
If you want to display certain strings in a specific base, you can use the `od` command. For example, the default separator `IFS` contains a space, a `TAB`, and a newline, which can be checked against `man ascii`:
```
$ echo -n "$IFS" | od -c
0000000      \t  \n
0000003
$ echo -n "$IFS" | od -b
0000000 040 011 012
0000003
```
Neither `let` nor `expr` can perform floating-point arithmetic, but `bc` and `awk` can:
```
$ echo "scale=3; 1/13" | bc
.076
$ echo 1 13 | awk '{printf("%0.3f\n", $1/$2)}'
0.077
```
Notes: `bc` needs the precision to be specified for floating-point operations, otherwise it defaults to 0, i.e. by default only the integer part of a result is kept. `awk`, on the other hand, is very flexible about the number of decimal places, which is controlled simply through the `printf` format.
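For instance, a sketch of `printf`-controlled precision in `awk` (the operands are just for illustration):

```sh
# the same quotient printed with 2 and with 5 decimal places
echo 1 3 | awk '{printf("%.2f %.5f\n", $1/$2, $1/$2)}'
```

This prints `0.33 0.33333`.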
Supplement: when using `bc` without `scale` to specify the precision, adding the `-l` option to `bc` also enables floating-point operations, with a default precision of 20 digits. For example:
```
$ echo 1/13100 | bc -l
.00007633587786259541
```
High-precision calculations can be done with `bc -l`:
```
$ export cos=0.996293
$ echo "scale=100; a(sqrt(1-$cos^2)/$cos)*180/(a(1)*4)" | bc -l
4.9349547554113836327198340369318406051597063986552438753727649177325495504159766011527078286004072131
```
Of course, the calculation can also be done with `awk`:
```
$ echo 0.996293 | awk '{ printf("%s\n", atan2(sqrt(1-$1^2),$1)*180/3.1415926535); }'
4.93495
```
Here a set of test data is generated randomly, in the file `income.txt`:
```
1 3 4490
2 5 3896
3 4 3112
4 4 4716
5 4 4578
6 6 5399
7 3 5089
8 6 3029
9 4 6195
10 5 5145
```
Note: The three columns of data above are family number, family size, and total monthly family income.
Analysis: to find the family with the highest per-capita monthly income, divide the third column by the second, i.e. compute each family's per-capita monthly income, then sort by that value to find the highest.
Implementation:

```
#!/bin/bash
# gettopfamily.sh

[ $# -lt 1 ] && echo "please input the income file" && exit -1
[ ! -f $1 ] && echo "$1 is not a file" && exit -1

income=$1
awk '{
    printf("%d %0.2f\n", $1, $3/$2);
}' $income | sort -k 2 -n -r
```
Notes:

- `[ $# -lt 1 ]`: requires at least one argument; `$#` is the number of arguments passed to the script
- `[ ! -f $1 ]`: requires the first argument to be a file; for the usage of `-f` see the `test` command: `man test`
- `income=$1`: assigns the first argument to the `income` variable, which is then used as the file for `awk` to process
- `awk`: divides the third column by the second to get the per-capita monthly income; considering precision, two decimal places are kept
- `sort -k 2 -n -r`: sorts the `awk` output by its second column (`-k 2`), i.e. by per-capita monthly income, numerically (`-n`) and in descending order (`-r`)
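A minimal sketch of those `sort` options on throwaway data:

```sh
# sort by the 2nd column, numerically, in descending order
printf "a 3\nb 10\nc 2\n" | sort -k 2 -n -r
```

This prints `b 10`, `a 3`, `c 2` in that order; without `-n` the keys would be compared as strings, where `10` sorts before `2`.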
Demo:
```
$ ./gettopfamily.sh income.txt
7 1696.33
9 1548.75
1 1496.67
4 1179.00
5 1144.50
10 1029.00
6 899.83
2 779.20
3 778.00
8 504.83
```
Supplement: the `income.txt` data above was generated randomly. When doing experiments, it is often necessary to generate some data randomly; the next section introduces this in detail. Here is the script that generated the `income.txt` data:
```
#!/bin/bash
# genrandomdata.sh

for i in $(seq 1 10)
do
    echo $i $(($RANDOM/8192+3)) $((RANDOM/10+3000))
done
```
Note: the script above also uses the `seq` command to generate a sequence of numbers from 1 to 10; the detailed usage of this command is introduced further in the last section of this article.
The environment variable `RANDOM` generates random numbers from 0 to 32767, while the `rand()` function of `awk` generates random numbers between 0 and 1:
```
$ echo $RANDOM
81
$ echo | awk '{srand(); printf("%f", rand());}'
0.237788
```
Note: when `srand()` is called without an argument, it uses the current time as the seed of the `rand()` random-number generator.
Random numbers in a larger range can be obtained by scaling the `RANDOM` variable or by multiplying the result of `rand()` in `awk`:

```
$ expr $RANDOM / 128
$ echo | awk '{srand(); printf("%d\n", rand()*255);}'
```
Exercise: if you want to randomly generate an IP address within a certain IP segment, how should you do it? See the following example, which automatically obtains a usable IP address.
```
#!/bin/bash
# getip.sh -- get an usable ipaddress automatically
# author: falcon <[email protected]>
# update: Tue Oct 30 23:46:17 CST 2007

# set your own network, default gateway, and the timeout of ping command
net="192.168.1"
default_gateway="192.168.1.1"
over_time=2

# check the current ipaddress
ping -c 1 $default_gateway -W $over_time
[ $? -eq 0 ] && echo "the current ipaddress is okey!" && exit -1

while :; do
    # clear the current configuration
    ifconfig eth0 down
    # configure the ip address of the eth0
    ifconfig eth0 $net.$(($RANDOM /130 +2)) up
    # configure the default gateway
    route add default gw $default_gateway
    # check the new configuration
    ping -c 1 $default_gateway -W $over_time
    # if work, finish
    [ $? -eq 0 ] && break
done
```
Note: if your default gateway address is not `192.168.1.1`, please set `default_gateway` yourself (it can be viewed with the `route -n` command). Also, when configuring the address with `ifconfig`, do not use the gateway address, otherwise your IP address would be the same as the gateway's and the whole network would stop working properly.
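As a simpler sketch of the exercise above, without touching any interface configuration, a random host address for an assumed `192.168.1.0/24` segment could be generated like this (the segment and the 2-254 host range are assumptions; adjust them to your network):

```sh
# pick a random host part between 2 and 254 (0 and 255 are the
# network and broadcast addresses; 1 is assumed to be the gateway)
net=192.168.1
host=$((RANDOM % 253 + 2))
echo "$net.$host"
```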
In fact, a sequence of numbers can be generated with a loop, but why not use an existing tool? `seq` is just such a small tool for generating a sequence of numbers; you can specify the step between numbers as well as the separator between adjacent numbers.
```
$ seq 5
1
2
3
4
5
$ seq 1 5
1
2
3
4
5
$ seq 1 2 5
1
3
5
$ seq -s: 1 2 5
1:3:5
$ seq 1 2 14
1
3
5
7
9
11
13
$ seq -w 1 2 14
01
03
05
07
09
11
13
$ seq -s: -w 1 2 14
01:03:05:07:09:11:13
$ seq -f "0x%g" 1 5
0x1
0x2
0x3
0x4
0x5
```
A more typical use of `seq` is to construct links in a specific format, and then download them with `wget`:

```
$ for i in `seq -f "http://thns.tsinghua.edu.cn/thnsebooks/ebook73/%02g.pdf" 1 21`; do wget -c $i; done
```
or
```
$ for i in `seq -w 1 21`; do wget -c http://thns.tsinghua.edu.cn/thnsebooks/ebook73/$i; done
```
Supplement: in `Bash` version 3 and above, after `in` in a `for` loop, you can write `{1..5}` directly to generate the numbers from 1 to 5 more concisely (note: exactly two dots between 1 and 5), for example:
```
$ for i in {1..5}; do echo -n "$i "; done
1 2 3 4 5
```
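As a further supplement to check on your own shell: Bash 4 and above also accept a step in brace expansion (an assumption to verify, since Bash 3 does not support it):

```sh
# brace expansion with a step of 2 (Bash 4+)
echo {1..9..2}
```

This prints `1 3 5 7 9`.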
Let's first give a definition of a word: a sequence of one or more letters.
First, count the number of occurrences of each word:
```
$ wget -c http://tinylab.org
$ cat index.html | sed -e "s/[^a-zA-Z]/\n/g" | grep -v ^$ | sort | uniq -c
```
Then, count the top 10 most frequently occurring words:
```
$ wget -c http://tinylab.org
$ cat index.html | sed -e "s/[^a-zA-Z]/\n/g" | grep -v ^$ | sort | uniq -c | sort -n -k 1 -r | head -10
    524 a
    238 tag
    205 href
    201 class
    193 http
    189 org
    175 tinylab
    174 www
    146 div
    128 titles
```
Notes:

- `cat index.html`: output the contents of the index.html file
- `sed -e "s/[^a-zA-Z]/\n/g"`: replace every non-alphabetic character with a newline, so that only letters remain and each word ends up on its own line
- `grep -v ^$`: remove blank lines
- `sort`: sort the words
- `uniq -c`: count adjacent identical lines, i.e. the occurrences of each word
- `sort -n -k 1 -r`: sort numerically (`-n`) by the first column (`-k 1`) in reverse order (`-r`)
- `head -10`: take the first ten lines
Two approaches can be considered:

- count only the words that need to be counted
- count all words with the algorithm above, then report the requested ones to the user

Both methods can be implemented with the following structure. Let's look at method one first:
```
#!/bin/bash
# statistic_words.sh

if [ $# -lt 1 ]; then
    echo "Usage: `basename $0` FILE WORDS ...."
    exit -1
fi

FILE=$1
((WORDS_NUM=$#-1))

for n in $(seq $WORDS_NUM)
do
    shift
    cat $FILE | sed -e "s/[^a-zA-Z]/\n/g" | grep -v ^$ | sort | grep "^$1$" | uniq -c
done
```
Notes:

- the `if` condition: requires at least two arguments; the first is the word file, the following arguments are the words to be counted
- `FILE=$1`: gets the file name, the first string after the script name
- `((WORDS_NUM=$#-1))`: gets the number of words, i.e. the total number of arguments `$#` minus the file-name argument (1)
- the `for` loop: first generates the sequence of word indices with `seq`; `shift` is a Shell built-in command (see `help shift`) that shifts the user's command-line arguments to the left so that the next argument becomes `$1`; in this way `$1` traverses all the words the user entered (if you think about it, it works rather like an array subscript). You can replace the line after `shift` with `echo $1` to experiment with how `shift` behaves
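A minimal sketch of that `shift` behavior, with hypothetical arguments set via `set --`:

```sh
set -- apple banana cherry   # simulate three command-line arguments
echo $1                      # first argument: apple
shift                        # drop it; the rest move left
echo $1                      # now: banana
echo $#                      # 2 arguments remain
```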
Demo:
```
$ chmod +x statistic_words.sh
$ ./statistic_words.sh index.html tinylab linux python
    175 tinylab
     43 linux
      3 python
```
Now let's look at method two; we only need to modify the line after `shift`:
```
#!/bin/bash
# statistic_words.sh

if [ $# -lt 1 ]; then
    echo "ERROR: you should input 2 words at least";
    echo "Usage: `basename $0` FILE WORDS ...."
    exit -1
fi

FILE=$1
((WORDS_NUM=$#-1))

for n in $(seq $WORDS_NUM)
do
    shift
    cat $FILE | sed -e "s/[^a-zA-Z]/\n/g" | grep -v ^$ | sort | uniq -c | grep " $1$"
done
```
Demo:
```
$ ./statistic_words.sh index.html tinylab linux python
    175 tinylab
     43 linux
      3 python
```
Explanation: clearly method one is much more efficient, because it picks out the words to be counted before counting, while method two does not. In fact, with `grep`'s `-E` option, we do not need to introduce a loop at all; a single command does it:
```
$ cat index.html | sed -e "s/[^a-zA-Z]/\n/g" | grep -v ^$ | sort | grep -E "^tinylab$|^linux$" | uniq -c
     43 linux
    175 tinylab
```
or
```
$ cat index.html | sed -e "s/[^a-zA-Z]/\n/g" | grep -v ^$ | sort | egrep "^tinylab$|^linux$" | uniq -c
     43 linux
    175 tinylab
```
Note: `sed` can process a file directly, without `cat` piping the contents in, which avoids an unnecessary pipe. The command above can therefore be simplified to:

```
$ sed -e "s/[^a-zA-Z]/\n/g" index.html | grep -v ^$ | sort | egrep "^tinylab$|^linux$" | uniq -c
     43 linux
    175 tinylab
```
So you can see just how useful the commands `sed`, `grep`, `uniq`, and `sort` are. Each of them performs only a simple function on its own, but with certain combinations they can accomplish all sorts of things. By the way, there is another very useful command for counting words, `wc -w`; use it when you need it.
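A quick sketch of `wc -w`:

```sh
# count the words on standard input
echo "one two three" | wc -w
```

This prints `3`.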
Supplement: the Advanced Bash-Scripting Guide also mentions the `jot` command and the `factor` command; since `jot` is not available on my machine, it was not tested. The `factor` command prints the prime factorization of a number, for example:
```
$ factor 100
100: 2 2 5 5
```
With that, the numerical-calculation part of these Shell programming examples comes to an end. This article mainly introduced:

- integer arithmetic, floating-point arithmetic, random-number generation, and sequence generation in Shell programming
- the difference between Shell built-in commands and external commands, and how to view their types and help
- several ways to execute Shell scripts
- several commonly used external commands: `sed`, `awk`, `grep`, `uniq`, `sort`, etc.
- examples: incrementing a number; finding the highest per-capita monthly income; automatically obtaining an `IP` address; counting words
- others: related usages such as command lists and conditional tests were covered in the examples above; please read them carefully
If you have time, please review:

- Advanced Bash-Scripting Guide
- Shell Thirteen Questions
- Twelve articles on Shell basics
- SED manual
- AWK User Manual
- Several Shell discussion forums: LinuxSir.org, ChinaUnix.net
It took me more than 3 hours to finish writing this; it is 23:33 now, time to go back to the dormitory and sleep. I will correct typos and add some content tomorrow. Good night, friends!

On October 31st: reworded some passages, added the example of calculating per-capita monthly household income, added the summary and references, and appended all the code.
Shell programming is a very interesting thing. If you compare the per-capita monthly household income example above with doing the same work in M$ Excel, you will find the former simple and hassle-free, and it gives you a real feeling of ease.