
What is the best resource to understand I/O, memory management, and multicores in Linux?

  • I have read the Understanding the Linux Kernel book, but I still don't understand how I/O is handled. Below are a few of my confusion points.

    I/O devices:

    1. How is DMA involved in I/O transfer?
    2. How does an asynchronous I/O read work? I mean, how does the OS know which process to interrupt? Is it based on the file handle?
    3. If the read system call doesn't return till the data is available, what is the point of asynchronous I/O?
    4. In some places I read about non-blocking I/O; how does it differ from asynchronous I/O?
    5. Will the I/O device directly write to RAM, or does the processor pull data from device memory into RAM? Who takes ownership?
    6. The select system call or individual threads in a web server (for example): which one is preferred, and why?
    7. How does the processor communicate with an I/O device? For example, how can the processor tell the CD-ROM to eject?
    8. If there are a lot of interrupts and the processor can handle only a few of them, what happens to the rest? Will they be queued? In that case, where are they queued, in processor memory or in RAM? If in processor memory, what is the specific hardware component that does this?
    9. If I open a file on disk and do random access, the program gets very slow due to I/O. Are there any clever ways to handle this other than prefetching blocks randomly too? :-)
    10. Software interrupts are raised by setting the interrupt register to the problem code. How do hardware interrupts happen? How does the OS or the processor know about them? Through the motherboard?

    Memory management:

    1. Will the TLB contain only the cached page table entries of the current process, or of all executing processes? If it's only the current process, then it will be a bottleneck, right?
    2. I read in K&R that malloc is implemented by having a header for every block; isn't it easy to corrupt malloc then? It is possible to add checks, but that makes it slower, right?
    3. If the read and mmap system calls both basically read a block, how is mmap different from read? Why should one prefer mmap over read?
    4. When does virtual-to-physical address translation take place? During the fetch or the execute phase of the processor?
    5. If I wish to keep my page from being evicted from memory by LRU or whatever reclaim policy, what should I do?
    6. If everything is a file in Linux, is memory also a file? Can I see its contents by executing cat?
    7. How can memory be corrupted by kernel modules?
    8. If Linux is worried about kernel modules crashing the kernel, why not run those modules in separate threads?

    Multicores:

    1. If my process is not multithreaded, is there any benefit from a multicore processor? Doesn't it always use only one core?
    2. top shows only the overall processor usage; how can I see individual core usage?
    3. If the OS doesn't support kernel threads, does multicore make sense for the OS?
    4. How do I make my application run on multiple cores?

    I still have more questions along these lines. I would be happy if people can answer these questions, or point me to a book where I can find the answers. Thanks.

  • Answer:

    The best resource, as mentioned by Tristan, is the source. Obviously, it's not that easy to dive into thousands of lines of code. The following resources are not just for the questions you asked, but more generally for understanding the Linux kernel. I suggest starting with the books explaining the Linux kernel.

    Understanding the Linux Kernel: http://oreilly.com/catalog/9780596005658/. This was one of the first books to provide in-depth explanations. Vastly improved over multiple editions, the current one is a very good read.

    Robert Love's Linux Kernel Development: http://www.amazon.com/Linux-Kernel-Development-Robert-Love/dp/0672329468. Love is a core developer who implemented the preemptible kernel and other features. I haven't personally read this, but I have seen good reviews of it. If you prefer a digital copy of this tome of more than 400 pages, here's the URL for the PDF: http://stid.googlecode.com/files/Linux.Kernel.Development.3rd.Edition.pdf

    Other resources:

    LWN's kernel page (http://lwn.net/Kernel/Index/) has lots of great articles explaining kernel internals.

    TLK, mentioned by Tristan: http://tldp.org/LDP/tlk/tlk.html

    If you are specifically looking at the networking aspects, this is an excellent book: http://oreilly.com/catalog/9780596002558

    For device drivers, similarly, this book is very useful: http://oreilly.com/catalog/9780596005900/

    Linux Journal's Kernel Korner has some excellent articles, most of which are freely available online.

    There's a huge index of links at http://jungla.dit.upm.es/~jmseyas/linux/kernel/hackers-docs.html that points to many more resources.

Pradeep Padala at Quora


Other answers

I'd like to recommend two textbooks which cover a lot of this material:

Computer Organization and Design, Patterson and Hennessy: http://www.amazon.com/Computer-Organization-Design-Fourth-Architecture/dp/0123744938/ref=sr_1_3?ie=UTF8&qid=1316195722&sr=8-3

Operating System Concepts, Silberschatz, Galvin, Gagne: http://www.amazon.com/Operating-System-Concepts-Abraham-Silberschatz/dp/0470128720/ref=sr_1_1?ie=UTF8&qid=1316195712&sr=8-1

I can try to answer a few of these. The I/O questions are very specific, and source code might be your best bet for some of them. The two textbooks should cover the rest.

Memory management:

1. Will the TLB contain only the cached page table entries of the current process, or of all executing processes? If it's only the current process, then it will be a bottleneck, right?

It depends on the architecture. x86, for example, doesn't have a tagged TLB, so it only has entries for the currently executing thread. This means it has to do a flush on every context switch to a new process.

3. If the read and mmap system calls both basically read a block, how is mmap different from read? Why should one prefer mmap over read?

mmap is a lot more general and provides a different interface. You can use it for sharing memory between processes, and it supports things like copy-on-write semantics and preemptively populating pages. Memory can also be pinned via mlock(). (There is a small mmap/mlock sketch after this answer.)

5. If I wish to keep my page from being evicted from memory by LRU or whatever reclaim policy, what should I do?

mlock() pins the pages in RAM so they won't get swapped out.

7. How can memory be corrupted by kernel modules?
8. If Linux is worried about kernel modules crashing the kernel, why not run those modules in separate threads?

Since kernel modules all share the same address space, there isn't strong protection like there is between separate processes. Suppose some broken kernel module decides to start zeroing out the kernel address space. Even if it's running in a separate thread, there's no way to clean up the damage, and we've got memory corruption. This is part of the microkernel vs. monolithic kernel argument: microkernels do run separate components in different processes, which can fail independently, be cleaned up, and then be restarted (at least in theory).

Multicores:

1. If my process is not multithreaded, is there any benefit from a multicore processor? Doesn't it always use only one core?

It's only going to use one core, but you won't have to share that core as much with other processes, so there is a potential improvement. Turbo Boost on Intel processors can also increase single-core performance when the other cores are idle.

2. top shows only the overall processor usage; how can I see individual core usage?

Try using htop instead of top. It shows per-core usage, and it's nicely colorized. (There's also a tiny sketch after this answer showing where those numbers come from.)

3. If the OS doesn't support kernel threads, does multicore make sense for the OS?

Without kernel threads (what OS doesn't have them these days?), the OS won't be able to schedule more than one thread of a process at once. With an n:1 user-thread-to-kernel-thread model, you're still stuck running only one real thread at a time. So, no, it doesn't really make sense to me.

4. How do I make my application run on multiple cores?

Always a difficult problem. Automatic parallelism is almost always data parallelism, things like the "parallel for loop" construct in OpenMP. I don't think true fine-grained parallelism has seen many general solutions; you're more or less stuck doing it with pthreads and locks (see the sketch after this answer). I've also heard good things about Intel's Threading Building Blocks, but I don't have any direct experience with it.

Andrew Wang
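As a rough illustration of the mmap()/mlock() points in the answer above, here is a minimal C sketch that maps a file instead of read()-ing it and then pins the mapping so it won't be reclaimed. The file name "data.bin" is made up for the example, and error handling is deliberately minimal.

```c
/* Sketch: map a file instead of read()ing it, then pin it in RAM.
 * Assumes "data.bin" exists and is non-empty. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* The kernel pages the file in on demand; no explicit read() copies. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Pin the pages so page reclaim won't evict them (needs CAP_IPC_LOCK
     * or a sufficient RLIMIT_MEMLOCK). */
    if (mlock(p, st.st_size) < 0)
        perror("mlock");            /* non-fatal for this demo */

    printf("first byte: %d\n", p[0]);

    munlock(p, st.st_size);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```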
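On the per-core usage question: tools like top and htop read the counters in /proc/stat, which has one "cpuN" line per core. A tiny sketch that just dumps those lines (no delta computation, so the values are cumulative jiffies rather than percentages):

```c
/* Sketch: print the per-core counter lines from /proc/stat.
 * top/htop read this file and compute usage from the deltas. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/stat", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[512];
    while (fgets(line, sizeof line, f)) {
        /* "cpu " is the aggregate; "cpu0", "cpu1", ... are individual cores. */
        if (strncmp(line, "cpu", 3) == 0 && line[3] != ' ')
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}
```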
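And a small pthreads sketch of the "do it yourself with threads" approach to using several cores: each thread sums its own slice of an array, and the kernel is free to schedule the threads on different cores. The thread count and array contents are arbitrary for the example.

```c
/* Sketch: data-parallel sum with pthreads; the scheduler spreads the
 * threads across cores. Compile with: cc sum.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4

static int data[N];

struct slice { int start, end; long sum; };

static void *worker(void *arg)
{
    struct slice *s = arg;
    for (int i = s->start; i < s->end; i++)
        s->sum += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1;

    pthread_t tid[NTHREADS];
    struct slice slices[NTHREADS];
    int chunk = N / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        slices[t].start = t * chunk;
        slices[t].end = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        slices[t].sum = 0;
        pthread_create(&tid[t], NULL, worker, &slices[t]);
    }

    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += slices[t].sum;
    }
    printf("total = %ld\n", total);   /* expect 1000000 */
    return 0;
}
```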

The best resource is always going to be the source code itself. Reading the source and understanding what is actually going on will answer your questions. The next best resource will be the mailing lists devoted to the various kernel subsystems, where kernel hackers discuss and explain the reasoning behind code inclusion, exclusion, and modification. After that, the internet will give you answers, provided you ask the right questions. Here is a (slightly outdated) link to a site which attempts to explain how the kernel works: http://tldp.org/LDP/tlk/tlk.html

Tristan Irwin

Sorry if my explanation is not very "academic", I just learned on the go.

1. How is DMA involved in I/O transfer?

Put simply, the device gets access to memory at a specific address: you read/write your communication with a particular device through that region, without the CPU copying every byte itself.

2. How does an asynchronous I/O read work? How does the OS know which process to interrupt? Is it based on the file handle?

Async I/O basically allows the processor to do other work without waiting for the device to finish the current operation. Say you are reading a particular sector of a hard drive: you can't read another one at the same time. Some operations allow async I/O, others don't.

3. If the read system call doesn't return till the data is available, what is the point of asynchronous I/O?

It depends on the OS, but for some operations you can indeed issue more than one read at the same time (without waiting for the first one to finish).

4. In some places I read about non-blocking I/O; how does it differ from asynchronous I/O?

Potato, potahto: if an I/O operation is non-blocking, it can be called async. (See the small non-blocking read sketch after this answer.)

5. Will the I/O device directly write to RAM, or does the processor pull data from device memory into RAM? Who takes ownership?

Network card example: the card has a "ring buffer"; it receives data and raises an interrupt to the CPU (there is already a sub-routine matching that interrupt). The CPU hands the interrupt to whatever layer handles it (most likely the driver), the driver moves the data from the network card to its own buffer, and it then lets the application know there is data to read. Experts: I know I'm summarizing A LOT and I will receive a lot of corrections; I'm just trying to explain the basics (and I'm not an expert).

6. The select system call or individual threads in a web server (for example): which one is preferred and why?

Not sure what you mean.

7. How does the processor communicate with an I/O device? For example, how can the processor tell the CD-ROM to eject?

Same thing: an interrupt plus parameters. Usually you won't get access to the interrupts unless you are developing the driver or you are on a pseudo-OS (like DOS).

8. If there are a lot of interrupts and the processor can handle only a few of them, what happens to the rest? Will they be queued? In that case, where are they queued, in processor memory or in RAM? If in processor memory, what is the specific hardware component that does this?

Interrupts have priorities, starting from zero (the clock). And yes, they will be queued.

9. If I open a file on disk and do random access, the program gets very slow due to I/O. Are there any clever ways to handle this other than prefetching blocks randomly too? :-)

You should use a buffered approach, which the OS layer already provides for sure.

10. Software interrupts are raised by setting the interrupt register to the problem code. How do hardware interrupts happen? How does the OS or the processor know about them? Through the motherboard?

It's part of the architecture. You have an interrupt list with addresses pointing to the routines to execute; when you register something new (through the driver, in most cases) you just change that list to add your subroutine, followed by the original one. You may share an interrupt, but that's a different story.

Memory management (questions 1-8): no idea about memory management.

Multicores:

1. If my process is not multithreaded, is there any benefit from a multicore processor? Doesn't it always use only one core?

You do get some benefit: if the CPU decides two instructions can be executed at the same time because there is no dependency between them, you will see the benefit.

2. top shows only the overall processor usage; how can I see individual core usage?

Don't you see that by pressing "1" in top?

3. If the OS doesn't support kernel threads, does multicore make sense for the OS?

AFAIK, no, but I may be wrong.

4. How do I make my application run on multiple cores?

It depends on the OS, but usually (again, experts will correct me) it's handled by the OS and not by your app.

Sebastian Silva
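To make the non-blocking vs. asynchronous distinction in the answer above concrete, here is a small C sketch that sets O_NONBLOCK on the read end of a pipe and shows read() returning immediately with EAGAIN instead of sleeping when no data is available. A pipe is used only to keep the example self-contained.

```c
/* Sketch: non-blocking read -- the call returns EAGAIN immediately
 * instead of sleeping until data arrives. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    /* Mark the read end non-blocking. */
    int flags = fcntl(fds[0], F_GETFL);
    fcntl(fds[0], F_SETFL, flags | O_NONBLOCK);

    char buf[64];
    ssize_t n = read(fds[0], buf, sizeof buf);   /* nothing written yet */
    if (n < 0 && errno == EAGAIN)
        printf("no data yet, but read() did not block\n");

    write(fds[1], "hello", 5);                   /* now there is data */
    n = read(fds[0], buf, sizeof buf);
    printf("read %zd bytes\n", n);
    return 0;
}
```

True asynchronous I/O (POSIX aio, io_uring, and the like) goes one step further: the call starts the transfer and the program is notified when it completes, rather than having to retry.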
