Really what you're talking about is a synthesizer. Such things exist: they consist of a computer that can reproduce the sounds of any instrument (whether sampled or synthesized) and an attached input system, which is normally a piano keyboard. The amount of memory and processing power needed is not huge by today's standards.
But I guess you won't be satisfied just to hear that what you've described has largely been achieved with everyday hardware. If you want to know what the future of musical instruments is, consider this: the challenge today isn't the processing capability of an electronic instrument; today's computer technology is more than capable of reproducing musical instrument sounds. The challenge is designing an input system that allows live performers the widest possible gamut of musical expression. The problem with keyboards is that there's no way to produce natural vibrato, bend individual notes, or adjust the dynamics of a held note.
A couple of new instruments have been invented that seek to give performers much more flexibility of expression. But it should be mentioned that such instruments are essentially a combination of a computer and an input system. While they are a substantial achievement to produce, the fact is that they don't need the "enormous processing power" you mention, nor the scale of internet collaboration you are suggesting.
The Eigenharp by Eigenlabs
The Seaboard by Roli
There are some other really great videos on YouTube; you should check them out.
Note: I'm an engineer but also a classically trained musician. I used to machine my own (Eastern) flutes out of aluminium tubing. Being an electrical engineer and a musician, this stuff fascinates me.
It can’t - not yet. Maybe not ever. There are some great sound samples out there, but they don’t really sound exactly like the real thing, and they don’t feel the same either.
I’m looking at purchasing an electronic keyboard right now. My old Casio needs updating, badly. I've been playing it since 1992 (can you imagine!). Hats off to Casio, by the way, for making a keyboard with some serious longevity. I've been hauling it back and forth to the theatre for shows for years, but it has only 61 keys and they aren't weighted. Some of the sounds are pretty great and some are terrible. But I need a new keyboard to use in productions. I've got Joseph & The Amazing Technicolor Dreamcoat coming up this summer and I think I should replace mine before then.
At any rate, I've been looking carefully at a lot of electronic keyboards and sample sounds for several months as I consider this purchase, and two things have held me back from dropping several thousand dollars on a decent stage keyboard: the awful quality of synthetic instrument sounds, and the weird feel of the keys. The latter is unimportant to your question, since you’d like to replace musicians entirely, but the former is quite critical, so I’ll walk you through just how inadequate the sound libraries are.
Let’s just consider synthetic/patched sounds right now. Here’s the Royal Grand 3D from the Nord library.
This beautiful tone is sampled from a Steinway Model D, and the recording is played by a real pianist. Here’s another Steinway Model D sample, this time from Synthology. Again, it's played by a real pianist, Volker Rogall.
Frederic Chopin - Etude Opus 25, No.11 in A Minor
Listen to that in full, then listen to this recording of Olivier Korber playing Chopin’s Etude Opus 25, No 11 in A Minor:
This is a live recording on an actual Steinway D of the same piece.
As beautiful and amazing as the sampling is, there’s a resonant depth to the tone of the real instrument that the sampling simply cannot replicate. Now I’ll let you in on a secret - pianos and organs are the instruments that computers are best able to replicate in sound!
This is supposed to be a cello. That sound is simply offensive. To compare, listen to Sebastian Bäverstam playing Zoltán Kodály, Sonata in B minor for solo cello, Op.8, mvt. III,
Nord thinks a French Horn sounds like this. If it seems like I’m picking on Nord - I’m not. They’re open about sharing their libraries, and the libraries are pretty good for synthetic libraries.
Here’s Sarah Willis of the Berliner Philharmoniker demonstrating the incredible depth of sound a French horn actually makes:
Perhaps synthetic reeds are better? Oboe?
https://www.nordkeyboards.com/sites/default/files/sample_libraries_3/mp3/Oboe_KG%20mono%203.0.mp3
Not a chance. My roommate in college played the oboe, and I think she’s curled up in a ball somewhere crying after listening to that. Here’s a recording of Henrik Chaim Goldschmidt playing "Gabriel's Oboe":
Synthetic sounds just don’t get it right. That beautiful emotive quiver in the oboe, which comes from the two reeds vibrating together, is entirely lost. The lovely fullness is gone and you’re left with the worst possible caricature of an oboe.
All this, and we haven’t gotten to the problem of how a computer plays music. A computer plays music precisely, like a metronome. A musician does not. Even when playing as a group, musicians make tiny adjustments to rhythm and tempo together that create the feeling of the piece, its emotional quality.
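Since a computer will otherwise play everything exactly on the grid, production software typically offers "humanize" functions that nudge each note slightly. A minimal sketch of the idea (the function name and jitter amount here are made up for illustration, not any particular DAW's feature):

```python
import random

# A mechanically precise eighth-note line: (start_time_sec, pitch)
quantized = [(i * 0.25, 60 + i) for i in range(8)]

def humanize(notes, timing_jitter=0.015, seed=42):
    """Nudge each onset by a few milliseconds, the way a live
    player naturally drifts around the beat."""
    rng = random.Random(seed)
    return [(t + rng.uniform(-timing_jitter, timing_jitter), p)
            for t, p in notes]

performed = humanize(quantized)
for (t0, p), (t1, _) in zip(quantized, performed):
    print(f"pitch {p}: {t0:.3f}s -> {t1:.3f}s")
```

Even this crude random offset sounds less robotic than perfect quantization, but real ensembles drift together, in correlated and musically motivated ways, which is far harder to fake.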
Listen to this pretty good rendition of Gustav Mahler’s Symphony No. 6 created by Curtis Allen
Now listen to the same piece performed by the Sinfónica de Galicia
The difference would be even more obvious in person because of losses due to compression on YouTube. The sound of an instrument in person is indescribably rich; the musicality is still irreplicable. Might it be replicable some day? I suppose it is possible, but I doubt it. Think of the difference between a factory-produced frozen hamburger patty and a fine hamburger from a top chef. There are things electronic instruments do very, very well. Replicating the artistry of a musician is not one of them. They are a tool for musicians to use and master, a supplement rather than a replacement.
A bunch of good answers to the question, but I had a slightly different tack on it … we learn and play real instruments because it’s easier to learn an instrument (almost any instrument) than it is to get a precise replica of the sound made by an instrument to come from a computer. Trust me, I work with sampled instruments all the time, with the goal of making the result sound as much like a real instrument as possible. And you know what? It’s damn hard to do! Even the best sampled instrument libraries require a huge amount of tweaking and playing around with instrument parameters to make them sound even remotely realistic. I played the clarinet in high school band, and as much work as it was to learn how to manage the fingerings and control the reed properly with embouchure so you don’t squawk like a dying whooping crane, it was still simpler than the total amount of work that went into the somewhat realistic clarinet solo that I just got to come out of my computer speakers. And I’m not even taking into consideration the ridiculous amount of programming work that went into the creation of that software instrument (recording hundreds of samples from somebody who plays the clarinet a lot better than I did, different notes at different dynamics, managing the unique note-on and note-off samples, legato and staccato transitions between notes, and on and on…) or the creation of the DAW (Logic Pro in my case) that lets me play the clarinet with my MIDI keyboard (which, by the way, I’m comfortable playing because of 18+ years learning the piano at the Royal Conservatory… let’s not forget that as well). All of that time and effort, by hundreds of people, just so I can write a tune that sounds approximately like a human clarinettist played it.
Honestly, it is just simpler to learn the clarinet.
“We” don’t, at least not for that reason.
Musicality, the ability to play a musical instrument, is present in only about 20% of the general population. Almost anyone can learn the mechanics of an instrument (there is a high school in Kansas with a 200-person band out of 240 students, and it is a matter of pride for the town), but making music is a little different.
Musicians at a level beyond beginner have a passion for playing. It’s not just a job or a requirement to get into college for them. Most of these musicians find an instrument or school of instruments/styles and explore it.
There are those who want to develop music digitally, a group that may or may not overlap with those who have musicality. The enjoyment of music is almost universal, but the need to play an instrument is not. Composition on a computer is not playing an instrument, per se, and it requires a different skill set than a violinist’s.
The strong desire to play music or to make music (which can be two different things) dictates why someone would continue trying to learn. If you don’t expose someone to learning an instrument (especially children), they can never find out whether they have a talent for playing music. The vast majority of children exposed to third-grade music classes never go on to play an instrument later on, but some do.
That latter fact is the main reason children should be exposed to playing music, along with a host of other experiences. The earlier they try, the more likely it is to take hold if the desire and talent are present. (The same is true of computers, I suspect, but they have a much more universal application, so those skills are almost mandatory these days.)
That’s why you should at least try an instrument at some time in your life.
New musical instruments are being invented all the time.
This one is from 2017; it makes a wide variety of sounds and is designed to be played live.
Traditional synthesizers can only produce rough approximations of real instruments. Nobody is going to be convinced by waveform manipulation that they are actually hearing a piano or a violin.
However, what computers can do very well, as you probably know, is play back audio. The most realistic instrument patches are produced from recordings of real instruments.
Patches for keyboard instruments tend to be the best—especially piano patches. This is partially because the piano doesn’t produce as wide a range of sounds for each note, and partially because most keyboard players are also piano players and have very high standards for synth piano sounds (but for some reason don’t seem to care that the strings sound totally fake). Most good piano modules also apply additional digital sound processing to simulate things like resonance and the sympathetic vibrations of undamped strings.
Sample patches can also do fairly good imitations of other keyboard instruments like organs and harpsichords, as well as classic analog electric keyboards like a Rhodes piano or Hammond organ.
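To give a sense of how the simplest samplers stretch a recording across the keyboard, here is a toy sketch (the function and the "recording" are invented for illustration, not any real sampler's code): one recorded note is re-pitched by playing it back at a different rate, which is also why a single sample spread over too many keys starts to sound unnatural.

```python
def repitch(sample, semitones):
    """Naively re-pitch a recording by resampling: playing it back
    faster raises the pitch (and shortens the note), the way basic
    samplers stretch one recording across nearby keys."""
    ratio = 2 ** (semitones / 12)  # equal-temperament frequency ratio
    n_out = int(len(sample) / ratio)
    # Nearest-neighbour resampling; real samplers interpolate.
    return [sample[int(i * ratio)] for i in range(n_out)]

# A fake 800-sample "recording" of one note (a simple repeating shape).
recorded = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5] * 100
up_an_octave = repitch(recorded, 12)
print(len(recorded), len(up_an_octave))  # the octave-up copy is half as long
```

The duration change is the telltale artifact: shift far enough and the attack and vibrato speed up audibly, which is one reason good libraries record many separate pitches instead of transposing one.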
Where it really starts to break down is with instruments where the performer has more direct control over sound production and scores of different articulations are possible for each note. There are about 25 different ways to play one note on the violin. Let’s say a good violinist can play about 45 different notes (not including harmonics). There are about 9 different dynamic levels marked in most music; maybe you don’t sample every single one, but let’s say at least 5. So we’re at 5,625 samples for your violin patch.
But wait! Some articulations involve not one pitch but two or more: trills, glissandi, portamento, and so on. For these kinds of effects, you theoretically have to record the transition between every pair of notes on the violin, and if you know anything about math, you can see we are headed for a catastrophic combinatorial explosion.
Now, the best sample libraries for strings do indeed contain thousands of sampled sounds per instrument, but they also cheat and fill in some of the gaps digitally for cases where it would be impractical to get samples of every articulation for every possible pitch.
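The back-of-envelope arithmetic above can be checked directly (the specific figures, 25 articulations, 45 notes, 5 dynamic levels, are the answer's rough estimates, not any real library's spec):

```python
articulations = 25   # ways to play a single note
notes = 45           # playable pitches (excluding harmonics)
dynamics = 5         # sampled dynamic levels

single_note_samples = articulations * notes * dynamics
print(single_note_samples)  # 5625 samples just for single notes

# Transitions (legato, glissando, portamento...) need a sample for
# every ordered pair of notes -- this is where it explodes:
transition_pairs = notes * (notes - 1)
print(transition_pairs)              # 1980 note pairs
print(transition_pairs * dynamics)   # 9900 at 5 dynamic levels
```

And that is before multiplying the transitions by the articulation styles themselves, which is why libraries fill gaps digitally rather than sampling everything.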
So basically the state of things is that you can often get pretty close to the sound of a real instrument, but for the most expressive instruments there’s always something a little “off”—especially for solo instruments. Instrument sections lose a little of the distinct quality of each note even with real players, so sections of instruments can sound a little more realistic than solos.
So interestingly enough, it’s easier to produce a realistic sounding “fake” orchestra track than a string quartet or saxophone solo—cases where you really hear the nuance of each instrument. Many video game and television scores these days will use sampled sounds for the orchestra and may only hire a few musicians to play solo parts—and in many cases, you have to listen very closely to be able to recognize that you’re not hearing a real orchestra. A fake violin solo, however, almost anyone will be able to hear. There are patch libraries which make it possible to get very close to the real thing, but using them to create realistic sounds also takes an incredible amount of time and training—and the “uncanny valley” is still there.
And in terms of real-time performance (i.e., a keyboard player trying to produce a realistic violin solo live), it’s not going to work. There are just too many parameters to control at once on the keyboard. Professional music production involves editing the MIDI quite a bit to get exactly the articulations you want for each note.
Here’s an example of someone using a high quality violin patch to play the violin on the keyboard. It sounds very good—much better than how similar patches sounded 20 years ago—but you can still hear in some of the note transitions that it’s just not quite real.
The piano sounds friggin’ great, though.
So that’s basically where we’re at. You can get pretty good instrument sounds out of a computer these days—not using waveform synthesis, but using sample patches. Synthesizers can’t really do this kind of thing.
Still, even the best sample patches lack something when it comes to realism.
This is the theremin.
It uses electromagnetic fields and metal antennas to generate music; the player controls pitch and volume without ever touching the instrument. If someone can do this, I’m pretty sure the possibilities are wide open.
To synthesize the waveform of a musical instrument requires a pretty detailed model of the physics of that instrument. Each type of instrument has its own attack, sustain, and decay behavior, and the harmonic spectrum changes over time. Some cheap synthesizers might have the right spectral elements in them, but not the details of the time behavior. And this is only the beginning: there are also effects due to materials that would not be in a simple model but would show up in an FFT of a real instrument. I think it is very possible to design software that produces realistic instrument sounds. I model such things frequently and they sound good to me, but perhaps I’m too proud of myself. Developing such a sophisticated model is expensive and not worth it for most people, and those who do it would not likely share it. I would say there is no “best algorithm” for each instrument. If you understand the physics of the instrument, you should be able to develop the model; that is really an example of multi-physics simulation, not simple algorithm development. In contrast, you could take a temporal spectrogram of sampled data from an instrument, extract its properties, and try to scale it to other notes and similar instruments (e.g., a clarinet scaled to a saxophone). But that would eventually fail too, and it is difficult to do without the right equipment.
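As a rough illustration of the kind of modelling described above, here is a minimal additive-synthesis sketch: an attack/decay/sustain/release (ADSR) amplitude envelope shaping a handful of harmonics. The harmonic weights and envelope times are made up for the example; a real physical model would also evolve the spectrum over the course of the note, which is exactly the time behavior cheap synths get wrong.

```python
import math

SR = 22050  # sample rate (Hz)

def adsr(n, a=0.02, d=0.1, s=0.7, r=0.2, dur=1.0):
    """Piecewise-linear attack/decay/sustain/release envelope
    evaluated at sample index n."""
    t = n / SR
    if t < a:                      # attack: ramp 0 -> 1
        return t / a
    if t < a + d:                  # decay: ramp 1 -> sustain level
        return 1.0 - (1.0 - s) * (t - a) / d
    if t < dur - r:                # sustain
        return s
    if t < dur:                    # release: ramp sustain -> 0
        return s * (dur - t) / r
    return 0.0

def tone(freq=440.0, dur=1.0, harmonics=(1.0, 0.5, 0.25, 0.125)):
    """Sum of harmonics shaped by the ADSR envelope. The weights
    here are arbitrary, not measured from a real instrument."""
    n_samples = int(SR * dur)
    return [adsr(n, dur=dur) *
            sum(w * math.sin(2 * math.pi * freq * (k + 1) * n / SR)
                for k, w in enumerate(harmonics))
            for n in range(n_samples)]

samples = tone()
print(len(samples))
```

Note that the spectrum here is frozen: every harmonic rises and falls together. Making each partial follow its own time-varying envelope, as real instruments demand, is where the model's complexity starts to grow quickly.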
Possible to do it? Absolutely. Possible to do it on purpose? Much less likely.
Creating an instrument which will be commonly used in the future will require at least two (ideally all) of the following:
- Playability - it needs to be something which a person can easily put their hands and/or mouth on and get at least a very simple rhythm or tune out of. It doesn’t have to be easy to achieve mastery, but taking the first steps would have to be at least as easy as the instruments which are common today.
- Affordability - if you want it to be commonly used, it needs to be something which people (and schools) can fit into their budget. A million things compete for people’s attention nowadays, and as such they’re not likely to buy an expensive instrument if it’s going to end up being a dust-gatherer. But they may buy something if it’s inexpensive and therefore low-risk.
- Uniqueness - most of the instruments which are commonly played today have existed in some form for a century or more. They have been refined for playability and tone. They already work. To create an instrument which will be commonly used in the future, you need an idea which is significantly different from what’s already out there, but ideally can be used alongside the existing instruments in a band or other ensemble.
A good example of the first two is the recent surge in popularity of the ukulele - picking out one’s first simple melody or chord progression on a ukulele is not difficult; there is a low barrier to entry. You can also find a ukulele which is serviceable for a beginner at a price point under $50 US. It’s not by any means a unique instrument, being very much a member of the same family as the guitar, banjo, mandolin, balalaika, bouzouki, dulcimer and so on. But it has playability and affordability.
Almost all non-digital instruments fall into one or more of a few categories:
Chordophones (producing sound through the vibration of a string)
Aerophones (producing sound through the passage of air through the instrument)
Membranophones (producing sound through the vibration of a struck surface)
Idiophones (producing sound on their own by being shaken or hit)
My suggestion would be to determine what category of instrument you want to create, figure out a good balance of playability, affordability and uniqueness… and then make a prototype and hand it to a total stranger with no instructions other than ‘it’s a musical instrument, see if you can play it’. If they can get a very simple melody to happen within a few minutes, and enjoy the process of doing so, you might just have something.
So, short answer is yes, sort of. Some are better than others.
Yamaha has a fairly interesting digital saxophone. And Roland and some others have a “digital instrument” that doesn’t quite look like any normal instrument.
A simple internet search will bring up lots of lovely pictures and links for purchasing if you desire.
Digital drum sets have existed for years. And all the digital workstations for music production have drum patches and loops that don’t even need any hardware at all.
And then there’s MIDI. You can synthesize 100s of instruments with MIDI sound libraries. Our keyboard has over 600 built in.
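For reference, mapping a MIDI note number to a pitch is a one-line equal-temperament formula (A4 = note 69 = 440 Hz); a minimal sketch:

```python
def midi_to_hz(note: int) -> float:
    """Equal-tempered frequency for a MIDI note number (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)
```

For example, `midi_to_hz(60)` gives about 261.63 Hz (middle C), and note 81 gives exactly 880 Hz, one octave above A4.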
Most synthesizers have sound patches for all sorts of instruments, some much closer to the acoustic instruments than others …
I see your point. If women can simulate orgasm, what’s the point of trying to satisfy them?…
But here’s the thing: not all women can simulate; and some of those simulations are really-really bad. And some are too perfect, and always the same.
The same goes for music. I have been playing real instruments for four decades now, and I started making music on computers over three decades ago. In 2003 I even sold five hours of “elevator music” made entirely on computer to a chain of malls.
But I wasn’t extremely proud of that music. It was too “perfect”, even with the added imperfections. It felt fake. You know, like those fake orgasms you can see and hear in porn movies. I actually felt relieved a decade later, when they changed the music.
I already gave this as an example to a similar question, but here it is again:
A more recent example, My One and Only Love by Guy Wood and Robert Mellin, a work still in progress. I know exactly what I would like to hear, and I am pretty close, but not quite there yet:
The total editing time of the file is over 120 hours, and I expect to reach my goal before 200. That is, if I keep working on it. But then I will have a perfect version…
A good saxophone player with a small band will play it better in a few hours. Maybe less than an hour. They will play a slightly different version every time, and if they are good enough, every version will be a different echo of perfection. I think this justifies the further use of real instruments…
Yes! Without a doubt! However, mainstreaming that instrument would be the hard part. The steel drum is fairly new in the grand scheme of things, but it hasn’t been fully integrated into a whole lot (yet!).
New things are being tried every day, but just like the thousands of instruments before, very few of them ever come to fruition.
I’d say the world of percussion is the fastest and most easily growing group of instruments. New sounds and shapes are always being invented and sold by the big companies. Many inventions are touch-ups on old designs, improvements made one piece at a time. Last time I checked, Hildegard of Bingen didn’t have any saxophones to work with.
Hope this helps!
I think it can be done now, if someone wanted to undertake the financial burden. The only limiting technical issue is that one would also have to design in a multitrack mixer board capability. Another cool feature would be to have the device change the singer's voice into musical instruments. The thing would be beastly expensive to develop. It would need to be fairly large, and it would require cooling that didn't intrude on the music with fan noise.
If every instrument is synthesized, it would require a great many DSPs. Sampling specific instruments would add refined tonal quality, especially if one could sample instruments such as a Stradivarius. Someone more musically inclined would have to write that specification ;-)
Do not assume that “generating realistic instrument sounds” involves only algorithms.
The algorithms you speak of are used for full synthesis, meaning you take basic sound waves and combine and process them to make them sound a certain way they did not before. For example, the algorithm you linked allows an otherwise flat-sounding wave to have an attack or transient that is more string-like.
However, modern virtual instruments use samples of real instruments being played and recorded, and a lot of the algorithms used now are not to generate the basic tone, but to trigger the correct tonal response when certain playing events occur. For example, picking the right sample from the correct string when a certain note is played, because it would be physically impossible to play the same note on a different string on a real instrument - unlike straight ahead sampling where this is not a concern. There are also such things as the effect of a succeeding note muting and changing or blending with the tone of a previous one. For example, a cymbal hit twice in succession sounds different from a sample triggered twice in succession - the latter sounds like two cymbals hit one after the other.
So no, we have not solved the problem, not by a long shot. We still cannot generate realistic instrument tones. But what we have done is learn to sample and lay out real instruments as virtual instruments by combining samples, algorithms, and other computing techniques.
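A toy sketch of that sample-selection logic (all names and note numbers here are my own hypothetical guitar example; real samplers are far more involved): the code picks a recording by string and pitch rather than synthesizing the tone, falling back to any string that can produce the note:

```python
# Hypothetical sample library keyed by (string, MIDI note). A real
# sampler maps to audio files; here we just use labels. On a guitar,
# open G3 (MIDI 55) and the D string's 5th fret are the same pitch
# but have different timbres - exactly the distinction described above.
library = {
    ("G", 55): "G_open.wav",
    ("D", 55): "D_fret5.wav",
    ("D", 50): "D_open.wav",
}

def pick_sample(midi_note, preferred_string):
    """Return the recording for this note on the preferred string,
    falling back to any string that can produce the pitch."""
    hit = library.get((preferred_string, midi_note))
    if hit is not None:
        return hit
    for (string, note), sample in library.items():
        if note == midi_note:
            return sample
    return None
```

So `pick_sample(55, "D")` deliberately returns a different recording than `pick_sample(55, "G")`, even though the pitch is identical.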
> If computer can simulate any sound and music instruments, what's the point of practicing instruments?
“What’s the point of learning to draw when we have video cameras?”
It actually can’t, at least not for a sane amount of money.
A single instrument has an incredible number of subtleties to it, none of which are properly represented in a computer simula… actually, that’s not even a simulation, but a synthesizer.
What most computer synthesizers do is take recordings of live instruments and basically stitch those recordings together.
This is kind of as if you were asked to sing the individual letters of the alphabet, and then a recording were stitched together from the letters you sang. The result can be very awkward, but for many situations it is passable.
A few examples…
Piano - I’ve yet to see a synthesizer that can simulate all of its physical properties, specifically the sustain pedal. A typical synthesizer produces a very dry piano sound, without the typical ROAR of a grand piano. On the real thing, if you press the sustain pedal and sing, the piano will respond; if you knock on the case with the sustain pressed, you’ll hear a characteristic “hum”. The overtones respond to each other, and in a real piano during real play all those resonances contribute to the rich sound a piano creates.
Accordion - good luck simulating all the stuff you can do with bellows control. There are multiple types of tremolo, and there’s precise control over individual notes, which can swell or fade; on a computer you’ll be tediously trying to replicate this with curves, I guess.
Violin: Bow control
Then there’s this thing:
Basically, when dealing with computer sound, most of the time you’re dealing with a cheap and simplified imitation of the real instrument that is nowhere near as good as the real thing. The real instrument gives you more tools to control the sound and express your ideas, and it lets you do that more easily than a computer program does.
To properly simulate an instrument you’d likely need something on the level of molecular simulation, and that is not something your home PC can handle. So no, it can’t simulate any sound.
The advantage is that there are a LOT of those cheap knockoffs available to you, meaning you can mish-mash them together, but individual pieces you use are usually vastly inferior to the original.
Besides, with an instrument you can actually PLAY something for others, by yourself.
Kinda doesn’t have the same effect as blasting an mp3 through your phone, does it…
For most percussion and string instruments, the best algorithm we have is modal synthesis, so you should look into that.
In plucked strings, every harmonic has beating. This is because, say, a 100 Hz harmonic is really three 100 Hz harmonics (one with the string moving up and down, one with the string moving left and right, one with the string compressing against itself). These overlapping harmonics have very slightly different frequencies for reasons like string stiffness, or the bridge being stiffer on the X axis than the Y axis. Not only that, but the next harmonic over can have somewhat different beat frequencies. This effect can be replicated in Karplus-Strong synthesis (multiple added delay lines with different complex allpass filters in the feedback loop), but it might be easier to do in modal synthesis.
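Here is a minimal Karplus-Strong sketch of the delay-line idea (my own toy, without the allpass filters mentioned above): two slightly detuned strings are summed so their harmonics beat against each other, a crude stand-in for the overlapping polarisation modes. The 0.3% detune and 0.996 loss factor are arbitrary assumptions:

```python
import random

def pluck(freq, sr=44100, dur=1.0, loss=0.996):
    """One Karplus-Strong string: a noise burst circulating through a
    looped averaging filter whose delay length sets the pitch."""
    n = int(sr / freq)                  # delay-line length ~ one period
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(int(sr * dur)):
        s = line[i % n]
        # average with the next sample = lowpass + decay in the loop
        line[i % n] = loss * 0.5 * (s + line[(i + 1) % n])
        out.append(s)
    return out

# Two strings detuned by 0.3% so their shared harmonics beat.
a = pluck(220.0)
b = pluck(220.0 * 1.003)
samples = [(x + y) / 2.0 for x, y in zip(a, b)]
```

Each string alone sounds static by comparison; the sum has the slow amplitude wobble the answer describes.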
Wind instruments and bowed strings are harder to synthesize… Right now the most popular systems for those are based on synchronized sample layers. In theory these instruments behave in a way similar to nonlinear waveguide synthesis, but it's very hard to tune the models correctly (and very easy to get bad models, because most models tend to naturally produce square waves while most wind instruments produce a spectrum closer to a saw wave).
Yes. Probably 99.99% of everything that comes from a synthesizer.
Synths are good at “imitating” the sound of an acoustic instrument.
Synths are great at creating completely unique sounds that cannot possibly be emulated on an acoustic instrument.
That's the most gratifying aspect of being a synthesist; creating unique tones that are beautiful, fresh and completely different from traditional acoustic sounds.
Is it possible to create an instrument that can play all hearable frequencies?
Sure. Just take a synthesizer that has a white noise generator, and crank that up. If it’s a decent noise generator, it will have all hearable frequencies at once.
If you want only one frequency at a time, just get a sine wave generator and sweep up and down from 20Hz to 20KHz. Done.
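A minimal sketch of that sweep (standard library only); note that the phase has to be accumulated sample by sample so the instantaneous frequency actually follows the ramp, rather than computing sin(2πft) directly:

```python
import math

def sine_sweep(f0=20.0, f1=20000.0, dur=2.0, sr=44100):
    """Linear sine sweep from f0 to f1 Hz over dur seconds."""
    n = int(sr * dur)
    out, phase = [], 0.0
    for i in range(n):
        f = f0 + (f1 - f0) * i / n   # instantaneous frequency ramp
        phase += 2.0 * math.pi * f / sr
        out.append(math.sin(phase))
    return out

sweep = sine_sweep()
```

Writing `sin(2*pi*f(t)*t)` instead is a classic mistake: the effective frequency ends up sweeping twice as fast as intended.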
The problem is that “can do it” and “is musically worthwhile” are two different things. Most of the higher frequencies aren’t really useful by themselves, only as overtones and harmonics of a base note - the top key on a standard 88-key piano is at only 4186 Hz, and many people, especially those who are older or have been exposed to a lot of loud noises, can’t hear anything above 8 kHz to 10 kHz.
So basically, the entire top half of the 20 Hz–20 kHz range may not be audible to your audience at all, and another quarter of it, from 5 kHz to 10 kHz, is just insanely high notes that aren’t really useful. (There’s also the detail that 5 kHz to 20 kHz is only two octaves, so it’s not as big a range as it sounds.)
And even in the 20 Hz–5 kHz range, being able to play *all* frequencies isn’t as useful as you’d think - if you play 440 Hz and 883 Hz at the same time, it will sound out of tune and dissonant, especially to people who have only listened to music using a well-tempered 12-note octave. And if you play 440 and then play 883 separately, the vast majority of people won’t notice that it’s not a perfect octave, so being able to play 880, 881, 882, and 883, and fractions thereof, isn’t that useful.
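To put a number on how far 883 Hz is from a perfect octave above 440 Hz: interval sizes are usually measured in cents, 1200 per octave. A quick sketch:

```python
import math

def cents(f_ref: float, f: float) -> float:
    """Size of the interval from f_ref up to f, in cents (1200 per octave)."""
    return 1200.0 * math.log2(f / f_ref)

# How sharp is 883 Hz against the true octave (880 Hz) above A4?
offset = cents(880.0, 883.0)
```

The offset comes out to roughly 6 cents sharp, which is small enough that most listeners won’t catch it when the notes are played one after the other.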
There’s plenty of microtonal scales that use something other than well-tempered or other than 12 notes per octave - and there are plenty of instruments that can play in those scales. But even there, being able to play *all* frequencies will still result in dissonance and being accused of playing out of tune….
(Meanwhile, out in the real world, any decent guitarist is able to bend any note from E2 (82 Hz) a half step up to the next note, all the way to E6 (1318 Hz), so that range is covered already, and any synthesizer worth buying has a pitch-bend wheel and the ability to bend notes through the vast majority of the entire 20 Hz–5 kHz range. So, depending on how picky you are about the stuff above 5 kHz, the answer is “Not only is it possible, but decent gear can do it already.”)
They are. I have a number of them in the computer that I’m using to type this on.
All the major music companies, Yamaha, Korg, Roland, Linn, Moog, and the like, have a science department where they’re always inventing new stuff. First they’ll put it in an expensive housing and musicians will go into hock to be able to afford it, but eventually the sounds in them, like those of the MiniMoog, the ARP 2600, the Linn LM1, and the Roland TR-808, will be available as software for your laptop. As will algorithmic composers, and full recording studios. I have way more firepower in this MacBook Pro than I did in the entire recording studio that we cut gold records in back in the ’70s, and for which we paid the equivalent of $1000 an hour. The engineers and tape cost extra.
$1000 will now get you a computer that could burn down CBS Recorders, if they hadn’t torn the place down already.
Here’s a bit of light reading on what’s new.
Yes, it is trivially easy. Just pitch shift it up or down until it’s out of the range of the instrument. Or use a harmonizer, or distortion, or phaser, or delay, or bit crusher, or any of the other billion digital effects that can’t be reproduced acoustically.
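One of those effects, the bit crusher, is trivial digitally and has no acoustic counterpart; a minimal sketch (my own toy, assuming input samples normalized to [-1, 1]):

```python
def bitcrush(samples, bits=4):
    """Quantize samples to 2**bits levels - the classic 'crushed' lo-fi sound."""
    half = 2 ** bits / 2
    return [round(s * half) / half for s in samples]

crushed = bitcrush([0.0, 0.1, 0.5, -0.73, 1.0], bits=3)
```

With only 8 levels (`bits=3`), nearby input values collapse onto the same coarse step, which is exactly where the characteristic distortion comes from.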
Do you ever think computer generated music will overtake physical musical instruments?
Oh! You haven’t noticed yet?
It’s been that way for a long time now already….
Maybe less so for TV and videos - computers etc don’t look as fun so performers still like to jump around hitting instruments when they’re showing off.
But for sound…. You’re already in a world where the sound usually comes from 0s and 1s - binary digits.
There’s still plenty of the old physical stuff being used, but it comes to us via the digital world far more often than not. Gone are the days of families gathered around the piano for an evening’s entertainment. People gather less, and frequently each has their own computing device to pay attention to.
It’s been happening for a long time now. Look at the instruments people have used in the recent past for guitar music:
The 1990s - Nirvana - Smells Like Teen Spirit:
The 1980s - Def Leppard - Hysteria:
No.
Not in theory, because in theory there are infinite combinations between notes. First of all, the length of a combination of notes does not have an upper bound. Second, there are infinite frequencies with which to form melodies of infinite length…
Oh you mean in practice?
Not really, either, even if you stay within the Western 12-tone equal temperament tuning system, assume a reasonable upper bound for the length of a complex piece of music (one hour, maybe?), and make sure all the notes are not played at a ludicrous speed (otherwise you could in theory have an infinite number of notes in an hour).
I don’t care enough to calculate the immense number that would still come about as you take into account:
- Different combinations of notes into melodies.
- Different combination of melodies into polyphonies.
- Different combinations of notes into harmonies in one instance.
- Different combinations of harmonies in one instance into harmonies over time.
- Different combinations of melody and harmony into melody and accompaniment arrangements.
- Differences in rhythm applied to melodies, polyphonies and harmonies, either as one change to the whole, or multiple significant little changes.
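As one crude, made-up lower bound: count only pitch sequences, with nothing from the list above varied at all:

```python
# 12 pitch classes, melodies of exactly 64 notes, ignoring rhythm,
# harmony, polyphony, arrangement, and timbre entirely:
combinations = 12 ** 64
digits = len(str(combinations))  # a 70-digit number
```

And every item in the list above multiplies that count further.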
The thing is: All of what I wrote above will in practice make the music sound different.
In addition to that, there are many other things that will make music sound different. Have an orchestra or a pianist play the exact same notes in exact same rhythm and you will still have two different renditions of the music, because the timbres of the instruments are different.
When you listen to beautifully performed music streaming online, do you attribute it to humans or machines? Perhaps both, but I suspect that this does not take anything away from your appreciation of the piece.
I'm not comparing streaming music with AI, but there is one fundamental similarity that will (assuming AI gets developed to do so) enable both: systematic mechanisms.
Before a piece of online music hits your eardrums, it's processed in a way that makes it completely unrecognizable to us as music. It gets sampled to digital bits analogous to sound waves, it gets packaged as individual slices of information and sent across wires, fibers, or through the air, and eventually reprocessed and repackaged in such a way as to be again recognizable to us as music. Between the source human musician and the destination human recipient there are essentially just a bunch of mechanical (albeit digital) processes, transforming information from one form to another. This is not unlike energy transformations: the number and type of changes might make the sources at times unrecognizable, but they do not negate the resulting usefulness or effect of their energy input. I propose to you that any mechanical processes introduced between the musician and the listener do not take away the humanness of the music.
I offer this analogy and perspective because, as I see it, all AI will be (as impressive a system as it may be) is an ordered system of digitally mechanical processes, transforming inputs to outputs. To the degree that such a system outputs something impressively similar to human behavior, it will have taken an impressive amount of human planning, designing, building, and training in order to do so. Even when considering systems like IBM's Watson, which beat the best human competitors at the game Jeopardy, what you are really observing at the moment that system outputs an answer to the given question is the culmination of years' worth of human input throughout its entire development. It has been tweaked and calibrated, configured and arranged, trained and honed, for several years and with thousands of hours' worth of human input and training information. Its output is the digital distillate of human intellect.
Just like the consumable distillate of whiskey made from beer made from grain made from energy from the Sun, when you drink of it you're not tasting any less of its source, but rather, more: in its condensed and refined form. Cut off the roots, cut off the fruit.
Going back to the question at hand, speaking in hypotheticals, AI could perform any repeatable human task/behavior/output to whatever degree of precision the developers aspire to attain. This will not, however, diminish the beauty or any other attribute of the human authors, but rather, focus and distill their efforts into transformed outputs to experience.
Going further, one characteristic of human output that I believe will not effectively be attained is our faculty of authorship; that which enables us to make ethical decisions. AI "decisions" will always be the synthesized distillate of a network of mechanisms, only ever enabled by its human authors during development and training. This is why I believe that all of our AI developments will teach us more about our own intellect than it will any other, for that is all it will have access to.
THE house lights dimmed at the BTI Center for the Performing Arts in Raleigh, N.C., one night last month, the stage lights came up on the grand piano, and in front of a rapt audience Alfred Cortot played Chopin's Prelude in G (Op. 28, No. 3), as he had not for nearly 80 years.
Cortot is dead, of course. He was not present in physical form, nor was anyone else sitting at the keyboard of the Yamaha Disklavier Pro as the keys rose and fell. But this was his performance come back to life: his gentle touch, his luminosity, even his mistakes, like the light brush of an extra note at the periphery of the final chord.
I remember reading about a competition based on this, in which programmers produced near-identical versions of great performances, but I can't find that article right now. This is close:
These ghostly performances were in aid of an annual competition called Rencon, which pits different computer systems against each other in a battle of musical expression. It is considered to be a musical Turing Test of sorts; the aim is to create a system that can play music in a manner that is indistinguishable from a human.
--Rencon: a 'Turing Test for musical expression' (Wired UK)
But it's not exactly what I remember, because in that competition specific performances were recreated.
Not necessarily.
The first mass-produced solid-body electric guitar, the Fender Esquire—which was almost immediately redesigned and relaunched as the Fender Broadcaster—was designed by Leo Fender.
Mr Fender couldn’t play the guitar.
He didn’t need to be able to. He fully understood the electronics of wiring a guitar for amplification, he’d worked on lap steel guitars for years, and things like fret spacing have been well-understood by luthiers for centuries. What he pioneered, which nobody had done before him to the same extent, was figuring out what shape a solid guitar ought to be.
It took him a few goes. Here’s the original prototype:
(Source: FGF_museum_01._Leo_and_early_models.jpg by Mr. Littlehand, derivative work by Clusternote, CC BY 2.0, via Wikimedia Commons)
Here’s a production model of one of those early Esquires. Notice the differently-shaped headstock:
(Source: 1954_Fender_Esquire_($27,000),_Vintage_Guitar_show,_SXSW2009.jpg by 3rdparty!, derivative work by Clusternote, CC BY 2.0, via Wikimedia Commons)
And here’s the Broadcaster. The main difference is the extra pickup, but harder to see is the addition of a metal truss rod to stop the neck from bending, which was a problem with the original Esquire:
(Source: Fender_Broadcaster_(1950)_&_left-handed_Stratocaster,_Museum_of_Making_Music.jpg by doryfour, derivative work by Clusternote, CC BY-SA 2.0, via Wikimedia Commons)
That’s pretty much what they look like today.
And all designed by a guy who was an engineer, but not a musician.
I think that with the passing of time, it can become real. I recently tried several AI websites that actually generate music with vocals, and I used my own lyrics to generate a song. Here is the link to what I created.
This song didn't match the beauty of a human voice, but trust me: if it can generate music like this even at this early stage, imagine how much more beautiful the songs it will be able to create in time. AI is for real.
Nope, no way, no how. Everything that has been invented is it. There's nothing left to invent, so don't waste your time trying to invent anything new.
The harp is pretty close, only missing a few notes compared to the piano. But no other orchestral instrument comes even close.
There is one though which can play notes both lower and higher than a piano: the pipe organ.
You chose the least musical person ever to ask this, Jay.
Ah, well, I'll try.
From my incredibly limited musical knowledge, I know that musical instruments have a tendency to evolve and adapt to become more convenient. Sadly, I can't see how musical instruments could get any more convenient unless they started playing themselves.
Now that I think of it, that's not a bad idea.
Oh, wait, that's why we have YouTube.
The only thing I really do feel about the future of musical instruments is that they are not likely to come in contact with me very soon, and that is the best for eardrums of everyone around me.
Never. You can't play a guitar with a keyboard and make it sound like a guitar. You don't have the option to play the same note at several different frets on different strings, all with a different sound. You can't strum a keyboard. You can't slide up or down the fretboard. You can't hammer on or pull off. You can't palm-mute. You can't do 32nd note tremolo picking. You don't get the interaction with the amp (feedback, sustain, effects pedals) or the sound of the amp speaker moving air in front of a mic (which is the only proper way to record a guitar).
Same with any other acoustic instrument: they all have unique organic ways of interacting with them with your hands, feet, and mouth, things that can't be replicated with digital samples and a keyboard controller.
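There is a concrete protocol-level reason behind the "you can't bend individual notes from a keyboard" complaint: in standard MIDI, pitch bend is a channel-wide message that carries no note number. Here is a small Python sketch of the actual wire format (the ±2 semitone `bend_range` default is the common convention, though receivers can be configured differently):

```python
def pitch_bend(channel, semitones, bend_range=2.0):
    """Build a standard MIDI pitch-bend message (status byte 0xE0 | channel).

    The bend amount is a 14-bit value split across two 7-bit data bytes,
    with 8192 meaning "no bend". Note that the message identifies only a
    channel, never a specific note.
    """
    value = int(8192 + 8192 * semitones / bend_range)
    value = max(0, min(16383, value))          # clamp to the 14-bit range
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])
```

Because the message names a channel but no note, it bends every sounding note on that channel at once; emulating guitar-style per-string bends requires dedicating one channel per string, the approach later standardized as MIDI Polyphonic Expression (MPE).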
All the other answers here are obsolete, from older people who wish we were still living in the past. There is no longer any point to practicing music or doing anything besides consuming. The largest segment by far of today’s music consumers neither know nor care about that fossil-music from before 2008 or so, and they accept that auto-tuned MIDI-generated pablum pumped at them everywhere. You can’t say they enjoy it, but enjoyment is an ancient concept too, a taboo mental condition that leads to addiction. You kids get off my lawn.