Why does cpulimit make my process STOPPED?
-
I'm running a Python script that uses the networkx package to run some algorithms on graphs. The script is:

import networkx as nx
from networkx.algorithms.approximation import clique

G = nx.read_adjlist("newman_one_mode.adj")
print "the number of nodes in the graph is: " + str(G.number_of_nodes())
max_clique_nodes = clique.max_clique(G)
print "the clique nodes are: " + str(max_clique_nodes)

It takes a long time and has high CPU usage (99%), so I want to limit its CPU usage. I used cpulimit on the process to cap it at 60%:

cpulimit -p 29780 -l 60

However, when I use it, the process gets STOPPED, as below:

[lily@geland academic]$ python run.py
the number of nodes in the graph is: 16264

[1]+  Stopped                 python run.py

What is wrong, and how should I deal with such situations? Thanks!

Side information: if I don't run cpulimit, the process runs for a long time and then gets killed. I don't know why; maybe it is due to resources being used up.

[lily@geland academic]$ python run.py
the number of nodes in the graph is: 16264

[1]+  Terminated              python run.py
Killed
-
Answer:
That's expected behavior. cpulimit suspends the process when it consumes too much CPU and resumes it after a certain amount of time. Also check whether your script is waiting for input; if it is, it will enter the stopped state as well. Try redirecting stdin and running it under cpulimit again, e.g.:

python run.py < /dev/null &
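For instance, a minimal sketch of the redirect-then-limit sequence (assuming cpulimit is installed and run.py is in the current directory; $! is the shell's PID of the most recent background job) could look like this:

$ python run.py < /dev/null &    # stdin redirected so the job never waits for terminal input
$ cpulimit -p $! -l 60           # throttle that PID to roughly 60% of one core

Even then, cpulimit enforces the limit by alternating SIGSTOP and SIGCONT, so the process will periodically show up in the stopped (T) state in ps; that by itself is not a failure.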
lily at Unix and Linux
Other answers
You'd probably be better off with nice (http://linux.die.net/man/1/nice), as cpulimit is a bit of a hack and may play poorly with shell job control and other mechanisms. nice is a capability of the operating system that adjusts scheduling priorities, so it is much smoother than what cpulimit does, which is to let a process run as fast as it wants until it exceeds a CPU percentage, at which point it gets a SIGSTOP, followed by a sleep, and then a SIGCONT.

As a simple example, consider this "copy a bunch of zeros to nowhere" shell script:

$ cat waster
#!/bin/sh
dd if=/dev/zero of=/dev/null count=${1}000000

$ time ./waster 5     # takes about 3.7 seconds on my machine
$ time ./waster 10    # takes about 7.4 seconds, no surprise

Now run them at the same time:

$ time ./waster 5 & time ./waster 10 &

These take 7.1 seconds and 11.1 seconds because they are fighting for the CPU. But if I add nice:

$ time ./waster 5 & time nice -n 19 ./waster 10 &

then the first takes about 4.0 seconds and the nice'd one takes 12.9 seconds, because the nice'd one runs at the lowest possible priority, allowing the first to take nearly every bit of CPU it can get. And no process gets STOPped at any point.
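If the job is already running (as in the question, PID 29780), a quick sketch with renice(1), the standard companion to nice for processes that have already started, would be:

$ renice -n 19 -p 29780          # drop the running python process to the lowest priority
$ nice -n 19 python run.py &     # or start a fresh run at the lowest priority from the outset

Either way the scheduler simply favors other work when the CPU is contended, and the process is never put into the stopped state.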
msw