http://www.boost.org/doc/libs/1_42_0/doc/html/interprocess.html, especially http://www.boost.org/doc/libs/1_42_0/doc/html/interprocess/allocators_containers.html#interprocess.allocators_containers.containers_explained
Be careful if you use a file though. If you mmap a file and the machine dies, there are no consistency guarantees -- the kernel can output different sections of the file at different times. You should take a snapshot of the file if you want to be safe in the case of power failures.
Another thing to be careful of is memory fragmentation. If you go this route Boost will be doing its own memory allocation. There are a number of allocators they offer, and I'm not sure how well tuned they are. If you are using a lot of memory, you should have monitoring for fragmentation. If possible, you should avoid allocating short-lived, per request data in the shared space.
Every STL container takes an allocator template parameter. The default behavior is to allocate with 'new'. You can pass in a custom allocator object that you create. This document appears to discuss exactly your subject, sharing STL types across threads.
http://www.drdobbs.com/184406243
Another interesting approach is to override the global new operator. That is dangerous, though, so maybe provide a class-specific new operator instead (this can also be overridden!). Ah, the joys of C++, where everything is customizable...
You might want to check out STXXL...
Even if you go down the custom allocator route, i.e. create a memory-mapped file and then create an STL map structure with a custom allocator, the problem is pointers.
You have to make sure the memory-mapped file always loads at the same exact memory address. You can do this on some operating systems; e.g., on Windows you can choose the address with MapViewOfFileEx.
You could replace pointers by a template pointer class that keeps the load address of mmap() as a static member, and the offset relative to the mmap area as a member.

Using C++ STL data structures with memory-mapped files (mmap) requires careful consideration because STL containers expect to manage their own memory. However, you can leverage mmap to work with STL containers in a way that allows for efficient memory usage while ensuring data integrity. Here’s a general approach to using STL containers with mmap:
Steps to Use C++ STL with mmap
- Memory Mapping: First, create a memory-mapped file using mmap. This allows you to map a file or device into memory, enabling you to access the file as if it were part of the process's memory.
- Placement New: Use placement new to construct STL objects in the mmap'ed region. This allows you to create objects at a specific memory location.
- Custom Allocator: Consider implementing a custom allocator that uses the mmap'ed memory for allocation. This allocator can be passed to STL containers to ensure they allocate memory from the mapped region.
- Object Lifetime Management: Be cautious about the lifetime of the objects. You need to ensure that objects are properly destroyed when they are no longer needed, especially if you're using placement new.
- Synchronization: If the mmap'ed memory is shared between multiple processes, you'll need to implement synchronization mechanisms (like mutexes) to prevent data races.
Example
Here’s a basic example demonstrating the concept:
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

// Shared bump-allocation state over the mapped region.
static char* g_base = nullptr;
static std::size_t g_offset = 0;
static std::size_t g_capacity = 0;

// A minimal bump allocator that carves memory out of the mapped region,
// so the container's element storage actually lives in the file.
template <typename T>
struct MmapAllocator {
    using value_type = T;

    MmapAllocator() = default;
    template <typename U>
    MmapAllocator(const MmapAllocator<U>&) {}

    T* allocate(std::size_t n) {
        // Round the cursor up to T's alignment, then bump it.
        std::size_t start = (g_offset + alignof(T) - 1) & ~(alignof(T) - 1);
        if (start + n * sizeof(T) > g_capacity) throw std::bad_alloc();
        g_offset = start + n * sizeof(T);
        return reinterpret_cast<T*>(g_base + start);
    }
    void deallocate(T*, std::size_t) {
        // A bump allocator does not reclaim individual blocks.
    }
};

template <typename T, typename U>
bool operator==(const MmapAllocator<T>&, const MmapAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const MmapAllocator<T>&, const MmapAllocator<U>&) { return false; }

int main() {
    const char* filename = "mmap_example.dat";
    const std::size_t size = 1024;

    // Create and open the backing file, then grow it to the mapping size.
    int fd = open(filename, O_RDWR | O_CREAT, 0666);
    if (fd < 0 || ftruncate(fd, size) != 0) {
        perror("open/ftruncate");
        return 1;
    }

    // Memory-map the file.
    void* ptr = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    // The vector object itself occupies the start of the region; its
    // elements are bump-allocated from the bytes that follow.
    using Vec = std::vector<int, MmapAllocator<int>>;
    g_base = static_cast<char*>(ptr);
    g_offset = sizeof(Vec);
    g_capacity = size;

    // Use placement new to construct the vector in the mmap'ed region.
    auto* vec = new (ptr) Vec();
    vec->push_back(1);
    vec->push_back(2);
    vec->push_back(3);

    std::cout << "Vector contents: ";
    for (const auto& val : *vec) {
        std::cout << val << " ";
    }
    std::cout << std::endl;

    // Clean up: destroy the vector before unmapping its storage.
    vec->~Vec();
    munmap(ptr, size);
    close(fd);
    return 0;
}
Key Points
- Custom Allocator: The MmapAllocator class provides a custom allocator that you can use with STL containers.
- Placement New: The new(ptr) syntax constructs the vector in the mmap'ed region.
- Lifetime Management: Remember to manually call the destructor of the vector before unmapping the memory.
Considerations
- Performance: Using mmap can be slower for small allocations due to the overhead of managing memory pages.
- Portability: The approach may have platform-specific behavior, especially regarding file handling and mmap.
- Error Handling: Always check for errors, especially when dealing with system calls like mmap and open.
This approach provides a solid foundation for using STL containers with mmap'ed memory, but be sure to tailor it to fit your specific application needs and constraints.
I can only speak for myself.
Firstly, I can’t fathom writing C++ code without STL.
I did work on a project where it was strongly discouraged. The stud programmers who had won the respect of managers were telling people STL was not fast enough. Some people argued with them, pointing out that premature optimization made no sense: first it should have been proved that STL was too slow, and for which blocks of the code. It felt so 1990s.
Presuming someone is not standing next to you, insisting you program C++ like you would program C, you should be using all the STL you can. It’s debugged. It’s almost certainly written by very smart people, who realized millions of lines of code might be relying on it, in mission critical applications…so you can rely on it being pretty damn efficient for what it does.
I constantly use vectors and maps, and don't worry about them being inefficient until proven otherwise. Sets also have their uses for me. I haven't found much use for the other stuff (it's obviously useful, though).
And STL containers are designed to work with the algorithms. Algorithms can make for more compact, more reliable code (again, they have been thoroughly debugged and made efficient, so there's no need to waste time on that). And they support range-based for loops.
What’s inefficient, IMO, is trying to roll your own half baked, not documented, not fully tested alternatives.
Yes, absolutely. Much more so than in, say, student code, because “industry” programmers are far more likely to know how to use the STL*. The STL is awesome.
The STL is generally the best implementation available for the data structures it provides. You should never be writing your own vector**. You should not be writing your own sort, or reverse, or set_intersection, or make_heap. Use the STL version.
There are reasons to write your own classes. For example, Mike Acton’s points about the poor cache behavior of maps (and various other issues) are valid:
However, the whole point of SG14*** is to address these kinds of things, and over time I expect to see more STL in high-performance situations, not less. And most code is not trying to squeeze out a reliable 60 frames-per-second, it’s trying to produce a solution that works with minimal bugs and acceptable performance while consuming a minimum of programmer time.
For that reason, yes, we industry programmers use a lot of STL.
*I know that it’s technically the C++ standard library. But I’m writing this answer from my hotel at CppCon, and just got out of a presentation attended by actual C++ committee members, in which they were still calling it the STL. Old names die hard.
**I have written my own vector, I had a perfectly good reason for doing so, and I still hold a deep and furious loathing in my heart for the circumstances that forced it. So I stand by my “never” even though I have personally run into exceptions. I hope to have the last of the technical mess that led to it cleaned up in the next year so that I can throw the whole thing in the damnatio memoriae bin.
***SG14 is Study Group 14, the subsection of the C++ committee that works on improving the performance of the language (especially issues around latency). It is attended by people who work on video games, high frequency trading, and embedded devices.
The STL is coded about as well as you could imagine. For most scenarios, its containers are fast enough and easy to use. I’d only consider writing my own if there were a need for dirty optimizations in the name of maximum efficiency. As a simple example, a linked list is faster if the pointer to the next piece of datum is stored within the datum. That’s a hacky way to do it though — way clunkier, more prone to error, and less elegant. But with C++, it isn’t uncommon to go all out in the name of optimization, as the language comes out to play when performance matters. I’d also naturally write my own, or find a prewritten library, for a data structure unrepresented in the STL. There are some rare data structures that have advantages a programmer might need for certain situations.
There’s actually a FAQ question about this on Bjarne Stroustrup’s website, the guy who made C++, and in it, he addresses a question about the list implementation and this optimization. As you might expect, he defends the STL’s implementation as being plenty fast for most scenarios. I couldn’t find it on his website anymore, but old C++ pros telling him about this apparently warranted answering the critique.
You can’t avoid the use of raw pointers in C++.
You can hide the use of raw pointers behind smart pointer abstractions.
Most smart pointer library implementations are built on top of raw pointers.
The issue is that C++ runs on top of a CPU ISA (instruction set architecture). Virtually every ISA that I’ve ever seen has instruction-level support for raw pointers, but they generally have very limited support for smart pointers, if any.
So if you want to actually access memory, there’s a raw pointer access down at the bottom of the C++ abstraction layers.
You could implement a memory pool by doing a “placement new” of std::array<char, 0x1000000> and constructing the object on a memory range obtained from the OS using mmap.
But if you want to return smart pointers which use that array as the backing store, then at some point, you’re going through raw pointers — even if you never dereference them.
Sparingly and only as a last resort.
The C++ Standard Library offers a good collection of data structures and algorithms that fulfill 99% of normal use cases. Occasionally you may need to augment what the Standard Library offers with extra data, but that can be done easily with custom classes or even std::tuple in a pinch. It is very rare to implement a data structure from scratch: the more esoteric data structures are rarely needed, as their performance benefits are far outweighed by implementation complexity (the ‘c’ in |f(n)| ≤ c|g(n)|).
If you plan to produce code for others to use, then become familiar with the bridge between the containers and the algorithms: the iterators. Learn how they work, how they are categorized by functionality, and how to implement them. Once you master that, writing new algorithms or new containers will be easy, and you will leverage the power of the existing Standard Library without having to reimplement everything. But more importantly, you’ll be familiar with the Standard Library to realize when — if ever — it is necessary.
Data structures are the way by which data is stored and retrieved. Data structures are used everywhere -- and by everywhere, I mean wherever the technology exists. Skillful use of data structures is what makes an application run effectively and efficiently.
Have you ever thought of programming without using functions? That's what we would be doing if the stack never existed.
Now, coming to your question:
Think of what motivated you the most in technology. Be specific. For me, it's gaming. Try to think of how, and which, data structures can be used for simple games like snakes and ladders, UNO, ludo, etc. Arrive at a solution and ask experts whether you have arrived at the right one. If not, learn. Be curious -- a lot of the time, intelligence boils down to curiosity.
If you have learned a new language and can visualize how an application works internally, then you have succeeded at learning.
You can also improve your knowledge by practicing questions on data structures from HackerRank, CodeChef, and many other websites.
Please upvote. Thank you.
It’s used for two things.
- Access to the contents of a file as a region of memory, which can be more efficient than using explicit I/O operations (read/write, etc.).
- Creating shared memory between processes, with a filename as the “key” and using something like /tmp or some other tmpfs as the backing file system.
Very ambitious people will use mmap as a backing store and as shared memory across processes at the same time, but this can be very challenging to synchronize. It'll become more popular when Optane-like persistent memory is widely available and can be used with loads/stores into the mmapped region.
There is a universal answer to this universal question:
Q: Can I do (something from C) in C++?
A: Yes, but you probably shouldn’t.
C++ is (very nearly) a superset of C: it supports almost everything from C, and it has its own features not provided by C. One of those is data structures. C only provides structs and unions, which are of little help for handling a collection of data. The C++ standard library has a variety of "containers" available, such as lists and vectors. In addition, it also has an extensive set of algorithms that can be customized and applied to data that is held in different container classes.
If it’s for a school or personal learning project and you want to create C data structures to use in C++ code, it can work. But if this is a serious project, you are creating unnecessary work for yourself.
If you want to be a professional software developer, you will need to learn how data structures (and the algorithms that operate on them) are actually implemented, without the assistance of canned libraries of data structure containers and algorithms.
Then, you’ll have an understanding of what’s actually going on, how to choose and use the most appropriate containers and algorithms, etc. when you use canned libraries. You’ll have a more accurate mental model, and you’ll be more adept at using available libraries.
I recommend working through Mastering Algorithms with C, implementing as many data structures and algorithms as possible from scratch, debugging them, experimenting with them, etc. This is the way to learn how they really work, by implementing them without the help of canned libraries.
The STL (or now the C++ standard library) is meant to provide best-in-class algorithms for common tasks, or at least those with reasonably good performance for most use cases. Basically, if a better algorithm existed, it would find its way into the library. So for developers, it's impractical to spend time writing code to do what is already available; the odds that they can outperform the stdlib are too small. In fact, I would argue that the standard library should be much larger; devs are still writing too much code at the application level.
The other significant advantage is having a common vocabulary. If you're on a team and someone says "we'll use a vector here and a map there", it's convenient that everyone knows what that means.
From the operating system, and the operating system decides how to oblige.
When a computer boots, the first program that gets to execute has absolute access to all the memory. It can do whatever it wants, with whatever region of the memory it wants.
Fortunately, the first program is the operating system (unless you have a rootkit virus), and as soon as the OS boots, it marks all memory as protected. Memory protection is enforced in hardware: the page table records which pages can be read, which can be written, and which cannot be accessed at all. So initially the OS locks all pages, and since it is the first program, it has no problem accessing anything itself.
Whenever you run a program via the OS, the system first unlocks a few pages, loads the program there, and starts executing it. If this program now tries to access any memory outside these initial pages, it will cause a page fault, raised by the CPU and handled by the OS. The OS usually terminates the program for bad behavior (you see a segmentation fault error on screen).
Well-behaved programs, on the other hand, when they need access to other memory, be it for their own use or to inspect the memory of other programs, ask the OS nicely. The OS decides exactly what to do. For example, to let a program access the memory of other programs, the OS asks the user to grant it administrative privileges (the dimmed-out verification dialog in Windows), and once granted, lets it read that memory.
mmap is the system call programs make when they want to grab extra pages of memory to use as their own. The OS either grants them these pages (usually) or rejects the request (rarely, e.g., when memory is full). Before granting the request, however, the OS marks those particular pages as readable/writable, so that the program can access them.
Now comes the tricky part. The OS rapidly switches between programs and runs each for a short while (1-10 milliseconds), to make it appear that they are running in parallel. Every time the OS switches between programs, it marks all pages as inaccessible and then marks the pages of that particular program as accessible. That's how the OS protects programs' memories from each other, and that's one of the reasons switching between programs is expensive.
You simply cannot do that.
Why? Because once you have requested an array, that array has a fixed size.
If you later allocate another array, you cannot extend the first one in place; you end up juggling both, which means it is no longer just one array.
Also, if you use the second array, it holds none of the first one's contents unless you actually copy all of them over, which takes a lot of time, and freeing the old array afterwards fragments memory (because the first array must remain alive while the second is being filled).
The best way is to know how much you will use and request it just once.
If you can't, arrays are not the best idea.
The STL portion of the C++ standard library is broadly used in industry. The STL provides flexible and fully-debugged algorithms and data structures that cover a wide variety of needs.
Certain industries cannot adopt the STL, either because of performance limitations or because STL data structures allocate memory dynamically, which is forbidden by the most restrictive coding standards in the automotive and aerospace industries.
STL containers are distributed as header-only templates. So, yes, it is quite possible, because you have the source for the templates available. The STL containers are compiled from the template source every time they are instantiated, with all the types filled in.
That being said, it is highly unlikely to be easy to successfully alter any of the STL containers significantly from the inside, because they are very compact and have very little leeway for change in their basic function.
A much more successful strategy would be to create composite classes that add the functionality you desire while containing the STL class itself, unmodified. If there is something the container does not already do, such as split itself into two parts or provide some kind of self-modification or compaction, then it is better to create a class around it that implements those functions.
Writing C++ programs.
Or Java programs, C# programs, Python programs, FORTRAN programs, or any other language. Data structures are the same thing in all languages. Only the syntax varies.
As Mr Bi said, map/set covers many cases for which you would use a tree, but has a higher level of abstraction, because you don’t want explicitly to traverse the links in a tree when you iterate through a map. You just want to say ++iterator. After the amount of time spent learning about trees in Algorithms & Data Structures, it was interesting to me that they are not used much on their own.
There is a balanced binary tree data type in STL, or rather in your STL implementation. Map and set are built on it. You can read the map code and find what its name is, and instantiate it. Best of luck.
"Better" is a subjective value. It depends on how you define "better".
Passing pointers is faster than pushing values onto the function call stack or copying blocks of memory; orders of magnitude faster for large structures. So if you define faster == better, then yes.
However, because C and C++ do no bounds checking on pointer math, it can be potentially dangerous from both a stability and a security standpoint. C/C++ don't do garbage collection either, so passing pointers can be the source of memory leaks. So if you define safer == better, then no.
Personally, I am of the opinion that you should only use C or C++ when speed matters. If it's not a performance-critical routine, you should be writing it in a higher-level language (Ruby, Erlang, Haskell, etc.). So, if you're only using C/C++ for performance-critical code, then it follows that you should be writing the fastest C/C++ code you are capable of producing.
The data members of a C++ struct are exactly the same as in C.
C++ simply adds member functions to the struct recipe to make it a full object with all the abilities of classes, but that code is kept in the program's code area, not in the object's data. The data part is interchangeable with C, and you can send C++ structs into C functions, and C structs into C++ functions.
If a C++ struct is sent into a C function such as a libc standard function, then only the data part is seen. C++ class and struct objects that are members are seen as plain C struct members by the C function, but the data layout may not exactly match, so don't rely on that.
I recently wrote this function for a C++ ftp program —
    // retrieve file from server
    long get_ftp(ftp::Connection& conn, const char* path)
    {
        struct stat stat_buf;
        // retrieve remote file size in bytes
        unsigned remotefile_size = conn.size(path, BINARY);
        // note: POSIX basename() may modify its argument
        const char* localfile = basename(const_cast<char*>(path));
        conn.get(localfile, path, BINARY);
        if (stat(localfile, &stat_buf) == -1)
        {
            perror("stat");
            return -1;
        }
        long localfile_size = stat_buf.st_size;
        cerr << conn.getLastResponse() << "\n";
        if (localfile_size == static_cast<long>(remotefile_size))
            return localfile_size;
        else
        {
            cerr << "ERROR: local file and remote file differ in size\n";
            return -1;
        }
    }
The struct stat stat_buf
and the
    stat(localfile, &stat_buf)
function is straight from
    #include <sys/types.h>
    #include <sys/stat.h>
It compiles right in and calls the pre-compiled stat function in libc.
The stat function has no knowledge or ability to use the constructors, destructors, copy constructors, etc. , as it is compiled with a C compiler.
As long as none of those are called by the stat function, then everything is fine.
Now, the struct stat stat_buf is declared in C++, where struct types get the full treatment of constructors and destructors; here, though, it has no user-defined constructors, so its members start out uninitialized, just as they would under a C compiler.
But structs being fully supported objects in C++, you can value-initialize one (struct stat stat_buf{};) if you want zeroed-out members, which makes the C++ version safer anyhow.
In C/C++, where does mmap() request memory from?
Firstly, mmap(…) is not a function specified in either the C or C++ language standards. This is worth noting all on its own, but it is especially worth noting because understanding where mmap(…) comes from also answers the question.
The mmap(…) C function is part of the POSIX [ https://en.m.wikipedia.org/wiki/POSIX ] standard. The POSIX standard dates back to the late 1980s and specifies a series of APIs for operating systems to implement, to ensure application portability.
So to answer the question directly, mmap(…) is a function your operating system provides.
One obvious possibility would be to store a hash of each line instead of storing the complete content of the line. If you use (for example) a SHA-1 hash, you store only 160 bits (20 bytes), regardless of that line's length. If lines average less than 20 bytes/characters long, this will be a net loss, but if they're longer than that on average, it's a net gain (and typical text files average around 60-80 characters per line).
If the lines in the input are short, then a lot of your current memory usage is probably due to the overhead of creating a string for each line, and then creating a node structure around that (where the "node" is what's actually stored in the hash table maintained by the unordered_map).
In this case, you can probably save a great deal of space by reading the entire input file into a single big chunk of memory. Then scan through that memory and find the beginning of each line, and build an index of the lines as a vector of pointers to char.
Then you can sort those pointers based on the strings they point at.
Finally, walk through the index (which is already sorted) and compare each string to its predecessor and successor. If it's different from both, then you have a unique string, and you can write it out to the output.
The question is what you mean by "modify" the STL containers. If you are referring to changing the implementation of a container, like having a vector implemented as a linked list, then most likely no: you would have to argue with the Standards Committee to make the change, and then the standard library implementers would have to implement it. If you are talking about adding new functionality to a container, then that might be more likely to go through the Standards Committee, as in the case of emplace_back with many STL containers.
There is no best. The design of a memory pool needs to match the actual usage for an optimal implementation; what is best for one situation may be very suboptimal in another. A memory pool optimised for a small number of large allocations might struggle with lots of small allocations. The nature of a memory pool requires using raw pointers in order to manage the memory in the pool, but you can use smart pointers with blocks of memory allocated from the pool, though it does complicate things.
Heap management is a level below user source code; it is a system’s function.
You can of course build a heap-managing allocator (if you know how) and use it in place of the STL default allocators. The STL is designed to allow for "traits" of allocators and objects that might be necessary or preferred for certain applications. std::allocator and std::allocator_traits can be customized to provide the desired behavior, but then you must make them stateful objects that carry around per-instance data about their allocations.
The common way to do this is to allocate from a memory pool, so that when a pool becomes empty (or perhaps almost empty), it can be released back to the main pool to provide a larger contiguous block. This is usually not heap-allocated memory, though; it is most often taken directly off the stack.
One of the reasons interpreted languages are not used for critical applications is that the heap compaction process is so intrusive into the program flow. Most languages with heap compaction run it in a senior, uninterruptible thread that suspends all other threads until it completes or reaches a compaction threshold.
This is too long for safety equipment, high-speed sensor data, real-time guaranteed processes, or other kinds of event handlers. Look, people cannot even stand for the mouse cursor to freeze for a second, much less wait 10 seconds for the heap to clear and compact while playing a game.
C++ game libraries do use pool allocators for fast memory turnover, but the common feature is they allocate more than they need by a considerable margin ahead of time, and reallocate again more than they currently need when a lower bound of free memory is approached.
Compaction of allocated pointers into a memory region is not what C++ is geared for operationally. C++ really really likes to leave objects in the memory addresses they were created in, and the STL is designed to do this with its containers and algorithms.
I would suggest, if you wish to avoid fragmented memory with large numbers of small objects and high turnover, that you pre-allocate memory pools and use array-based containers in those pools. Then compaction consists of sorting the array, leaving allocated space in a contiguous chunk at the end of the array.
Use linked-list based containers to right-size the memory footprint, and let the system put objects in the heap where they should go. To defragment the heap, copy the list to a list in a different pool, clear the list, then copy it back in one bulk operation.
A hybrid approach may be to create sub-views into a larger array, so that items stay where they are, contiguously in the master array, and are then referred to by pointers or view objects in smaller arrays. The objects themselves are of fixed number, but can be marked uninitialized, written over, cleared and reused, or invalidated; that is, recycled. This has the advantage of guaranteeing beforehand that the objects' allocation will not fail during runtime. Even more hybrid is the strategy of starting off with an object pool of sufficient size for most jobs, but allowing for more if (unlikely) you need more. Hash tables often do this: performance is optimized up to a certain size for 95% of uses, then reallocation is done by resizing the table by 1.5x to 2x.
All this compaction is expensive and takes time away from doing work; that is the nature of memory compaction. You should try to avoid it when you can, and postpone it to convenient times (low traffic, after calculations finish) when you must.
Both. I'd start by learning the C++ standard library (formerly known as STL) containers (or Java or C# collections, depending on your favorite language). Learn how to use them, and get their exception guarantees and asymptotic time complexities down.
Then I'd start trying to implement my own versions of them, exposing the same interface as the official version.
As a programmer, you'll rarely be making your own data structures from scratch: you'll be using the out-of-the-box containers offered by your language's standard library, occasionally expanding upon them or wrapping them in your own special-purpose versions. Learning data structures should always be done in a manner conformant with what your chosen language's standard library offers. You'll be killing multiple birds with a single stone that way.
I would say std::valarray, if you count that. It's the C++ version of the vector types you would typically find in languages and libraries designed for numeric computation, such as MATLAB. It's rarely used in C++, for reasons unknown to me.
You can, but I don't think that's the best way to do things. As stated in other answers, C++ is a superset of C: it supports everything that you can possibly do in C and can do even more on top of that.
John Patek’s answer to this is something that is quite relevant when you actually try to do something like this. If you want to try it out just for your curiosity then you can go ahead with it. For anything serious like a project or for the purpose of an organisation, this is not the ideal way of doing things.
Actually, there are tree data structures available in the C++ STL. The only problem is that they are not called "trees" but sets and maps. Sets and maps are internally implemented as balanced trees (red-black trees), which grants them almost exactly the classic tree complexities and features. Beware that unordered_set and unordered_map are not implemented as trees but as hash maps.
I agree with Malleus Veritas's answer to Is it better to pass pointers to data structures to a function in C++ and why? and would add this about speed:
If your function needs a number of arguments equal to or less than the number of CPU registers available for arguments then it's faster to pass the values instead of a pointer. In C++ "this" will consume an argument register, so you'll have 1 less for your own arguments. If you don't know how many registers your target platform will have available for arguments, it's pretty safe to assume 4 registers. If you have more than 4 arguments, passing a pointer will be much faster.
C++ is one of the best languages in which to implement data structures. In my opinion, it's good practice to implement data structures and basic algorithms yourself to get a feel for them. Once you are done with very general implementations, you can put them inside some headers and use them whenever you want.
This is what I did and I think it is worthwhile.