Some commonly used Linux commands



🔥 Terminal:
  • git remote set-url origin https://codeXXXXXXXXXXXXXXXXXXXX@github.com/elegantcoin/utils.git # point the remote at a token-authenticated URL (git login)
  • rsync --bwlimit=1000 -artP file1 -e 'ssh -p xxxx' user@targetmachine:file2 # rate-limited copy over ssh on a custom port
  • cat cmd_lines_2022-05_15.log | egrep -E "history lines |\(103\)" # filter the log
  • split -l 1000 -a 3 bigfiles.txt smallfiles_ # split into 1000-line chunks; the last argument is the output prefix, not a filename
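A detail worth calling out: the final argument to `split` is an output prefix, not a destination file. A round-trip sketch (the temp directory, chunk size, and names are illustrative):

```shell
# Make a 25-line sample, split it into 10-line chunks, then reassemble.
tmpdir=$(mktemp -d)
seq 1 25 > "$tmpdir/bigfiles.txt"
# -l 10: lines per chunk; -a 3: 3-character suffixes; last arg is the PREFIX
split -l 10 -a 3 "$tmpdir/bigfiles.txt" "$tmpdir/small_"
ls "$tmpdir"        # bigfiles.txt small_aaa small_aab small_aac
cat "$tmpdir"/small_* > "$tmpdir/rejoined.txt"
cmp "$tmpdir/bigfiles.txt" "$tmpdir/rejoined.txt" && echo "round-trip OK"
rm -r "$tmpdir"
```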
  • mysql -h service -P xxxx -u username -p # connect to MySQL (similarly: redis-cli -h service -p xxxx)
  • docker ps # list running containers
  • docker commit 1234 softeorename:v2.0 # commit a container into an image
  • docker save -o softeorename.tar softeorename:v2.0 # save the image to a tar
  • docker load -i softeorename.tar # load the image from a tar
  • docker run -dit -v /tmp/localpath:/home --name softeorename softeorename:v2.0 # the container-side mount path must be absolute
  • docker exec -it 100 /bin/bash # open a shell in container 100
  • keys * / get key100 # redis-cli queries
  • tar -cvf compress.tar /filefolder --exclude=./.ssh # tar, excluding .ssh
  • tar cf /tmp/compress.tar /big/tree --exclude-from <(find /big/tree -size +3M) # tar, skipping files over 3M
  • tar czf - /big/tree | split -b 1000m - small-file.tar. # stream the tar through split into 1000M chunks
  • find /tmp/path -size -3M | xargs tar cf /tmp/compress.tar # tar the files <3M under the path
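To split a tar archive the stream has to go to stdout (the `-` file) so `split` can chunk it. A minimal round-trip sketch of that recipe; the sample tree, the excluded `.ssh` directory, and the 1k chunk size are all illustrative:

```shell
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/tree/.ssh"
echo data   > "$tmpdir/tree/keep.txt"
echo secret > "$tmpdir/tree/.ssh/id_rsa"
# -cf - writes the archive to stdout so split can chunk it (1k chunks here)
tar -C "$tmpdir" -cf - --exclude='tree/.ssh' tree | split -b 1k - "$tmpdir/part_"
# Reassemble the chunks and list the members: keep.txt is in, .ssh is not
cat "$tmpdir"/part_* | tar -tf -
rm -r "$tmpdir"
```

Reassembly is just `cat part_* | tar xf -`; the chunk order is guaranteed by split's alphabetical suffixes.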
  • find ./ -name "*" -size +3M # find files larger than 3M
  • find ./ -name "*.txt" | xargs cat > 3.txt # concatenate all .txt files into one
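A quick sketch of the two find recipes above, size filtering and concatenation; the sample files and sizes are illustrative, and the glob is quoted so `find`, not the shell, expands it:

```shell
tmpdir=$(mktemp -d)
printf 'a\n' > "$tmpdir/1.txt"
printf 'b\n' > "$tmpdir/2.txt"
head -c 4194304 /dev/zero > "$tmpdir/big.bin"   # 4 MiB of zeros
# +3M: strictly larger than 3 MiB -> only big.bin
find "$tmpdir" -type f -size +3M
# Quote the pattern so the shell does not expand it before find runs
find "$tmpdir" -name "*.txt" | sort | xargs cat > "$tmpdir/all.out"
cat "$tmpdir/all.out"                           # a then b
rm -r "$tmpdir"
```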
  • wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh # download Anaconda
  • vim ~/.bash_history
  • vim ~/.bashrc
  • vim ~/.bash_profile
  • nohup jupyter 2>&1 & # nohup keeps the job alive after logout
  • jupyter notebook > jupyter.log 2>&1 & # send stdout and stderr to a log, run in the background
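The `> log 2>&1 &` pattern from the jupyter line, shown on a throwaway background job (the paths are illustrative):

```shell
tmpdir=$(mktemp -d)
# Same shape as the jupyter line: stdout and stderr into one log, job backgrounded
( echo out; echo err >&2 ) > "$tmpdir/job.log" 2>&1 &
wait $!                      # block until the background job finishes
cat "$tmpdir/job.log"        # both streams captured: out, err
rm -r "$tmpdir"
```

The redirection order matters: `2>&1` must come after `> log`, otherwise stderr still goes to the terminal.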
  • ps aux | egrep test.sh | egrep -v grep | awk '{print $2}' | xargs kill -9 # kill processes matching a name
  • ps aux | egrep python | sort -k 2 -n # sort the matches by PID
  • ps -eo pid,sgi_p,cmd --sort sgi_p|egrep "python" careful if root user
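The core of the kill one-liner is that column 2 of `ps aux` output is the PID, and `egrep -v grep` drops the grep process itself from the matches. A deterministic sketch on canned ps-style output (the PIDs are made up):

```shell
# Canned two-line ps-style output: the egrep -v grep step removes the
# grep process, and awk '{print $2}' extracts the PID column.
printf 'user  4242  0.0  bash test.sh\nuser  4243  0.0  grep test.sh\n' \
  | egrep 'test\.sh' | egrep -v grep | awk '{print $2}'
# prints 4242 -- that is what the real one-liner feeds to xargs kill -9
```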
  • magick -loop 0 -delay 10 fram*.jpg frame.gif # generate a gif from frames
  • tree -df # directories only, with full paths
  • rsvg-convert -z 10.0 pprof001_true.svg >pprof001.png # convert svg to png
  • tree -L 3 # tree, 3 levels deep
  • tree -df -I "*anaconda*|*git*" -L 3 # tree ignoring keyword patterns
  • rename 's/\.jpeg$/.jpg/' *.jpeg # rename using a regex (perl rename)
  • cat my_log.log | egrep ''$'\t123'$'\tsname' | sort -k 1 -k 2 # egrep a literal \t and sort
  • ls --sort=none # no sorting (useful in huge directories)
  • awk '!a[$0]++' my_log.log # drop duplicate lines without sorting
  • sed -i'.bak' 's/words/word/g' my_log.log # global replace in place, keeping a .bak copy
  • python -m cProfile my_script.py # profile with cProfile
  • go tool pprof cpu.out # Go profiling
  • /bin/spark2-submit --master yarn --deploy-mode client --queue username --driver-memory 32g --executor-cores 10 --executor-memory 32g # submit a Spark job
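The duplicate-dropping awk trick and the in-place sed replace above can be tried safely on a scratch file (the contents are illustrative):

```shell
tmpfile=$(mktemp)
printf 'word one\nword two\nword one\n' > "$tmpfile"
# awk '!a[$0]++' keeps only the first occurrence of each line -- no sort needed
awk '!a[$0]++' "$tmpfile"            # word one, word two
# sed -i.bak edits in place and leaves the original as a .bak copy
sed -i'.bak' 's/word/token/g' "$tmpfile"
head -1 "$tmpfile"                   # token one
rm "$tmpfile" "$tmpfile.bak"
```

Unlike `sort -u`, the awk form preserves the original line order, which is why it pairs well with logs.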
  • # Python zip()-like iteration in bash: number each *.txt file, append its name, line count, and content to all.txt, then rename it by index
      file_names=$(ls *.txt)
      i=0
      for s_one in $file_names; do
          ((++i))
          my_count=$(cat $s_one | wc -l)
          my_content=$(cat $s_one | sed 's/\n/\\n/g')
          echo "# ${s_one}\n# ${my_count}\n${my_content}\n" >> all.txt
          mv $s_one file_$i
      done
      # head my_log.log | awk -F ',' '{printf("%s %s\n", split($0, var_arr, ","), var_arr[1])}'
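A runnable version of the loop above, on a scratch directory; the file names and contents are illustrative, and the `sed 's/\n/\\n/g'` step is skipped since these samples are short:

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"
printf 'a\nb\n' > one.txt
printf 'c\n'    > two.txt
i=0
for s_one in $(ls *.txt); do
    i=$((i+1))
    my_count=$(wc -l < "$s_one")
    # Record name, line count, and content, then rename the file by its index
    printf '# %s\n# %s\n%s\n\n' "$s_one" "$my_count" "$(cat "$s_one")" >> all.txt
    mv "$s_one" "file_$i"
done
ls            # all.txt file_1 file_2
cd / && rm -r "$tmpdir"
```

Note the file list is captured once before the loop, so the freshly created `all.txt` never matches the `*.txt` iteration.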
  • # function in bash (auto_triger)
      auto_triger() {
          msg="this is a func"
          echo ${msg}
      }
  • # crontab -l to list, crontab -e to edit; start the robot every Tuesday at 09:30
      30 09 * * 2 cd /tmp/robot && sh start_robot.sh >> robot.log 2>&1
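The five leading fields of a cron line are minute, hour, day-of-month, month, and day-of-week. A sketch that parses them (`set -f` keeps the shell from glob-expanding the `*` fields; the line itself is the one above):

```shell
# Disable globbing so the * schedule fields survive word splitting
set -f
line='30 09 * * 2 cd /tmp/robot && sh start_robot.sh >> robot.log 2>&1'
set -- $line
echo "minute=$1 hour=$2 dom=$3 month=$4 dow=$5"   # 09:30 every Tuesday
set +f
```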

    Feel free to share; when reposting please credit the source: 内存溢出

    Original article: https://54852.com/langs/921598.html
