Profile photo for Rimantas Venckus

I have a very interesting question about file import into R. I have a file mydata.csv. This file is updated by other software every 2 seconds, with new data appended. How do you work around this problem? I mean, how do you refresh the data in R? Do you use some file read/write trick? I'm just wondering why a streaming-data package doesn't exist in such a useful language. I know that R doesn't support streams by itself, but it would be nice to have one. Nowadays data changes very quickly, and it would be very useful.


Profile photo for Michael Hochster

You can use getwd() to see where R is looking, and setwd() to change it. Or you can use the full path of the filename.
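For instance (a minimal sketch; the folder path here is a made-up example):

```r
# Where is R currently looking for files?
getwd()

# Point R at the folder holding the file (made-up path), then read it
setwd("C:/Users/me/data")
mydata <- read.csv("mydata.csv")

# Or skip setwd() entirely and pass the full path
mydata <- read.csv("C:/Users/me/data/mydata.csv")
```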

Profile photo for Patrick Burns

If you are on Windows, you can also use:
file.choose()
in place of the filename; a window pops up in which you can pick the file.
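In practice the call usually goes straight into read.csv (nothing here beyond base R):

```r
# file.choose() opens a picker dialog and returns the chosen path as a string
data <- read.csv(file.choose())
```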

Profile photo for Florian May

Are you trying to learn R from scratch? Google “r tutorial”, and learn the basic operations before trying to solve problems where you get stuck at each step. You're on the right path, keep going! An excellent resource is R for Data Science. Happy learning!

Are you a bot artificially filling Quora feeds? Please stop. Quora mods?

Are you anything other than a beginner who got stuck? Behold: https://duckduckgo.com/ (and other search engines) will change your life! Mild sarcasm aside, searching is faster than asking.

Profile photo for Quora User


Hope this answers your question:

  library(DT)
  library(shiny)

  ui <- fluidPage(
    fileInput('file1', 'Choose file to upload',
              accept = c(
                'text/csv',
                'text/comma-separated-values',
                'text/tab-separated-values',
                'text/plain',
                '.csv',
                '.tsv'
              )),
    tags$hr(),
    checkboxInput('header', 'Header', TRUE),
    radioButtons('sep', 'Separator',
                 c(Comma = ',',
                   Semicolon = ';',
                   Tab = '\t'),
                 ','),
    radioButtons('quote', 'Quote',
                 c(None = '',
                   'Double Quote' = '"',
                   'Single Quote' = "'"),
                 '"'),
    tags$hr(),
    p('If you want a sample .csv or .tsv file to upload,',
      'you can first download the sample',
      a(href = 'mtcars.csv', 'mtcars.csv'), 'or',
      a(href = 'pressure.tsv', 'pressure.tsv'),
      'files, and then try uploading them.'),
    DT::dataTableOutput('contents')
  )

  server <- function(input, output, session) {
    output$contents <- DT::renderDataTable({
      inFile <- input$file1
      if (is.null(inFile))
        return(NULL)
      read.csv(inFile$datapath, header = input$header,
               sep = input$sep, quote = input$quote)
    })
  }

  shinyApp(ui = ui, server = server)



Profile photo for Hemant Kaithwas


The process of importing and exporting CSV (Comma Separated Values) files in R is straightforward. R provides several packages and functions to import and export CSV files.

To import a CSV file in R, you can use the read.csv function from the base R package. This function is used to read CSV files and returns a data frame. Here's an example of how to use this function:

Code: data <- read.csv("file.csv")

  • In this example, "file.csv" is the name of the file you want to import, and "data" is the name of the object that will hold the data from the file.
  • If your CSV file has a header, you can specify it using the header argument:

Code: data <- read.csv("file.csv", header = TRUE)

  • If your CSV file has a different separator, such as a tab or a semicolon, you can specify it using the sep argument:

Code: data <- read.csv("file.tsv", sep = "\t", header = TRUE)

Code: data <- read.csv("file.csv", sep = ";", header = TRUE)

  • To export a CSV file in R, you can use the write.csv function from the base R package. This function is used to write data frames to CSV files. Here's an example of how to use this function:

Code: write.csv(data, "file.csv")

  • In this example, "data" is the name of the object that holds the data you want to export, and "file.csv" is the name of the file you want to create.
  • Note that write.csv always writes commas and ignores any sep argument you pass. For a different separator, use write.table (or write.csv2, which writes semicolons):

Code: write.table(data, "file.tsv", sep = "\t", row.names = FALSE)

Code: write.csv2(data, "file.csv")

  • You can also use the row.names argument to specify whether to include row names in the output file:

Code: write.csv(data, "file.csv", row.names = FALSE)

In conclusion, importing and exporting CSV files in R is a simple process, and the read.csv and write.csv functions make it easy to handle these operations. You can also use other packages, such as data.table and readr, to import and export CSV files with more options and better performance.
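As a quick sketch of those alternatives (assuming the readr and data.table packages are installed; "file.csv" is a placeholder name):

```r
# readr: returns a tibble and is faster than base read.csv
library(readr)
data1 <- read_csv("file.csv")

# data.table: fread() auto-detects the separator and header; very fast on big files
library(data.table)
data2 <- fread("file.csv")
```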

Profile photo for Jon Wayland


Yes. There are a few ways to do it, depending on your needs:

  1. You can read all csv files into R with read.csv(), change the column names with colnames(df) <- c("column1", "column2", ...), and then combine them with rbind(). Use this when the variable names differ but the variables are the same.
  2. You can read all csv files into R with read.csv(), and write each one to a single Excel file on separate sheets with write.xlsx() from the xlsx library. Use this when they do not need to be on the same sheet.
  3. You can read all csv files into R with read.csv(), and join them together with either merge(), given a unique joining key, or cbind() if the datasets are all the same length and ordered the same way. Use this when the variable names and variables differ, but the data is either relational or from the same population.
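Option 1 might look like this in base R (a sketch; the file names and column names are placeholders):

```r
# Read each file, force a common set of column names, then stack the rows
files <- c("file1.csv", "file2.csv")
dfs <- lapply(files, read.csv)
dfs <- lapply(dfs, function(df) { colnames(df) <- c("column1", "column2"); df })
combined <- do.call(rbind, dfs)
```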

Profile photo for Vincent Jiang

Simply upload them to Acho Studio as “Multiple CSV”, and the software will union/merge the files for you automatically.

Make sure that the column names are exactly the same, and the schema for each column (integer, float, boolean, string..) are the same as well.


I tried this with a for loop and it is working:

  library(dplyr)  # needed for mutate() and %>%

  df <- NULL
  Combined <- NULL

  a <- list.files(path = "File_path", pattern = "*.csv", recursive = T, full.names = T)

  ind <- 0

  for (i in a) {
    ind <- ind + 1
    b <- read.csv(i, skip = 0, header = F, nrows = 1, as.is = T)
    df <- read.csv(i, skip = 1, header = T)
    df <- df %>% mutate(URL = b$V1)
    Combined <- rbind(Combined, df)
  }

Profile photo for Rupkatha Ghosh


The standard C library also offers a higher-level fread function. Unlike the read function, you can set a buffer size. Buffers can be good or bad. On the one hand, they reduce the number of disk accesses. On the other hand, they introduce an intermediate step between the disk and your data; that is, they may cause the data to be copied needlessly. Buffers usually make software faster, because copying data in memory is much faster than reading it from disk.

For sequential access, both fread and ifstream are equally fast. Unbuffered IO (read) is slower, as expected. Memory mapping is not beneficial.
Thanks for the A2A :)

Profile photo for Joe Bologna


Let’s assume you want to parse CSV files that do not contain “,” characters in the fields. (Otherwise things get complicated, because it’s necessary to parse optionally quoted fields.)

  1. The simplest way, using just Bash, is by setting the IFS shell variable. This changes how lines are tokenized. Using IFS=“,” causes the shell to create a new token when a “,” is seen. (Normally token separators are whitespace.) 

For example, let’s start with a simple CSV file:

  % cat <<EOF > blah.csv
  row1,field1,field2
  row2,field1,field2
  row3,field1,field2
  EOF

Now let’s parse it with Bash:

  % (IFS=,; while read a b c; do echo $a: $b = $c; done < blah.csv)
  row1: field1 = field2
  row2: field1 = field2
  row3: field1 = field2

Using Bash for parsing works, but it is pretty slow. Almost all Unix/Linux systems have awk installed. Awk is an awesome high performance scripting language. Here’s how to parse CSV using awk:

  % awk -F , '{printf "%s: %s = %s\n", $1, $2, $3}' blah.csv
  row1: field1 = field2
  row2: field1 = field2
  row3: field1 = field2

Awk is specifically designed for parsing. The script above is very simple. Awk can do much, much more. If you are parsing stuff, I highly recommend learning it.

Let’s assume you have something like NodeJS installed. This is where things get interesting. NodeJS has NPM modules for all sorts of stuff, including parsing CSV files. When an NPM module is installed, the command-line interface to the module is usually installed as well.

I’ll install the csv-parser module so we can use it to convert CSV files with headers into newline-delimited JSON.

  % yarn add csv-parser
  % cat <<EOF > blah2.csv
  ROW,FIELD1,FIELD2
  1,"contents of field 1,1","contents of field 1,2"
  2,"contents of field 2,1","contents of field 2,2"
  EOF
  % node_modules/csv-parser/bin/csv-parser blah2.csv
  {"ROW":"1","FIELD1":"contents of field 1,1","FIELD2":"contents of field 1,2"}
  {"ROW":"2","FIELD1":"contents of field 2,1","FIELD2":"contents of field 2,2"}

Notice how the first row is used as the names of the attributes. Also notice how the quoted fields are parsed: using quotes allows embedding a “,” in a field.

Profile photo for David Lee


If the data appears clean when you look at it in a spreadsheet program (column headers are the proper attribute names, the columns appear "tidy" with the right values in the right columns, etc.) then I would suggest try opening the .csv file in a text editor, and use the options in "save as" (or perhaps in a format menu) to confirm that the character encoding is UTF-8. The text and file should appear the same to your eye. This has solved problems similar to yours for me many times, and I never could figure out where the change occurred in the process, so this is just one of those things that I check when we are having problems.
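If you would rather handle it from the R side, read.csv accepts a fileEncoding argument, so you can declare the file's encoding explicitly instead of re-saving it (a sketch; "file.csv" is a placeholder):

```r
# Declare how the bytes on disk are encoded so text columns import cleanly
data <- read.csv("file.csv", fileEncoding = "UTF-8")

# If the file was saved by older Windows software, latin1 is a common culprit
data <- read.csv("file.csv", fileEncoding = "latin1")
```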

Profile photo for Zhu Victor


Yes, for sure.

You can first read it with Pandas to produce a dataframe, and then convert it to a numpy array:

  import pandas as pd
  df = pd.read_csv('data.csv').to_numpy()

Or just with numpy:

  from numpy import genfromtxt
  my_data = genfromtxt('my_file.csv', delimiter=',')

If your csv has multiple columns and those columns have different data types, I would suggest you stick with a pandas df; it gives you more flexibility with data types, while with a numpy array you can only have one data type:

  import pandas as pd
  import numpy as np
  df = pd.read_csv(<file-name>, dtype={'Col_A': np.int64, 'Col_B': np.float64})
Profile photo for Dinesh Kumar

You can read the file using the following methods:

readr::read_csv("path/filename")

read.csv("path/filename", header = T)

You can also use RStudio to import a dataset without writing any code, via the File menu and importing that type of dataset.

Profile photo for Rugved Modak

CSV stands for comma-separated values, which means it is just a text file given the form of a spreadsheet by separating columns with commas. But some locales, such as French, use the comma as the decimal mark: in French, 3.14 is written as 3,14. In such cases you cannot split columns on commas, as that might split the values too, so a semicolon ; is used to separate values instead. Such files are what R's read.csv2 function reads (the file extension is still .csv). Sometimes a custom separator such as | or - is used instead. Well, now you know what csv2 is for.
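In R that is exactly what read.csv2 handles (a minimal sketch; "file.csv" is a placeholder name):

```r
# read.csv2 assumes sep = ";" and dec = "," (common in French/German locales)
data <- read.csv2("file.csv")

# which is equivalent to spelling it out with read.csv:
data <- read.csv("file.csv", sep = ";", dec = ",")
```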

I hope that helped. Good luck.

Profile photo for Chaithanya Kumar

R has built in functions to handle csv files.

Reading is as simple as

  df = read.csv("filename.csv", header = T)

To plot a histogram, use the hist() command; check its help page for more fine-grained control.

  hist(df$col_name)

Hope this helps.

Profile photo for Daniel Nebdal

Oops, I didn’t see your comments on the question. This is how I’d have written it:

  filenames = Sys.glob("*.csv")
  contents = lapply(filenames, read.csv)
  merged = Reduce(rbind, contents)

As for why what you have doesn’t work, I don’t think "" is a legal path for list.files. The default for path is "." (the current directory), so just drop the path argument.

I don’t have r.bind, only rbind without the dot; try that spelling.

Profile photo for Sai Teja Nannapaneni

First set the working directory using setwd():

setwd("filepath")

Next use read.csv to read the file:

Data = read.csv("filename.csv", header = T)

If your file has a separator such as ; or :, you can use the read.csv2 function:

Data = read.csv2("filename.csv", sep = ":", header = T)

For more on this function you can refer to the help page in RStudio.

Profile photo for Helene HT
  mydata = read.csv("myfile.csv", skip = 1)
  firstrow = read.csv("myfile.csv", nrows = 1, header = FALSE)
  mydata$extracolumn = unlist(firstrow[1, ])
Profile photo for Wayne Brehob

I use bash for reading CSV files with something like this.

  #!/bin/sh
  awk -F, '{ about a hundred lines of code }' $1

I use bash for things like reading the command-line arguments, providing a Usage statement if they’re not right, and checking the existence of the file, but I use ‘awk’ to process the CSV file.

Bash in itself is not very good at text processing, but there are many other tools that do it well, such as ‘awk’, and bash is good at calling them.

There may be even better tools, such as python, but I personally haven’t tried python for tasks like this.

Profile photo for Dave Wade-Stein


I’d recommend using pandas, so you can read it directly into a pandas DataFrame:

  >>> import pandas as pd
  >>> csv = pd.read_csv('weather.csv')
  >>> csv.columns
  Index(['DATE', 'max_tempF', 'mean_tempF', 'min_tempF', 'max_dew_pointF',
         'mean_dew_pointF', 'min_dew_pointF', 'max_humidity', 'mean_humidity',
         'min_humidity', 'Max Sea Level PressureIn',
         ' Mean Sea Level PressureIn', ' Min Sea Level PressureIn',
         ' Max VisibilityMiles', ' Mean VisibilityMiles', ' Min VisibilityMiles',
         ' Max Wind SpeedMPH', ' Mean Wind SpeedMPH', ' Max Gust SpeedMPH',
         'PrecipitationIn', ' CloudCover', ' Events', ' WindDirDegrees'],
        dtype='object')
  >>> csv.set_index('DATE', inplace=True)
  >>> csv.head()
             max_tempF  mean_tempF  min_tempF  max_dew_pointF  mean_dew_pointF  \
  DATE
  2012-3-10         56          40         24              24               20
  2012-3-11         67          49         30              43               31
  2012-3-12         71          62         53              59               55
  2012-3-13         76          63         50              57               53
  2012-3-14         80          62         44              58               52

             min_dew_pointF  max_humidity  mean_humidity  min_humidity  \
  DATE
  2012-3-10              16            74             50            26
  2012-3-11              24            78             53            28
  2012-3-12              43            90             76            61
  2012-3-13              47            93             66            38
  2012-3-14              43            93             68            42

             Max Sea Level PressureIn  ...  Max VisibilityMiles  \
  DATE                                 ...
  2012-3-10                     30.53  ...                   10
  2012-3-11                     30.37  ...                   10
  2012-3-12                     30.13  ...                   10
  2012-3-13                     30.12  ...                   10
  2012-3-14                     30.15  ...                   10

             Mean VisibilityMiles  Min VisibilityMiles  Max Wind SpeedMPH  \
  DATE
  2012-3-10                    10                   10                 13
  2012-3-11                    10                   10                 22
  2012-3-12                    10                    6                 24
  2012-3-13                    10                    4                 16
  2012-3-14                    10                   10                 16

             Mean Wind SpeedMPH  Max Gust SpeedMPH  PrecipitationIn  \
  DATE
  2012-3-10                   6               17.0             0.00
  2012-3-11                   7               32.0             0.00
  2012-3-12                  14               36.0             0.03
  2012-3-13                   5               24.0             0.00
  2012-3-14                   6               22.0             0.00

             CloudCover  Events  WindDirDegrees
  DATE
  2012-3-10           0     NaN             138
  2012-3-11           1    Rain             163
  2012-3-12           6    Rain             190
  2012-3-13           0     NaN             242
  2012-3-14           0     NaN             202

  [5 rows x 22 columns]
  >>>
Profile photo for Jonathan Ng

Here is a quick video that shows you how to read and combine .csv files in R

Here’s a more detailed video that shows you how to use variations of this technique to work in more situations.

Profile photo for Belal Khan

Just import it as you would any other file in Python. Python has a built-in csv module that you can use to read and write CSV files.

See this code below.

import csv
with open("data.csv", 'r') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        print(row)

It is very simple in python.
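The csv module can write files as well as read them; here is a minimal sketch (the file name out.csv and the sample rows are invented for the example):

```python
import csv

rows = [["id", "name"], [1, "alice"], [2, "bob"]]

# write the rows out, then read them back
with open("out.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

with open("out.csv", newline="") as f:
    print(list(csv.reader(f)))  # values come back as strings
```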

Source: Python CSV Reader Tutorial

Thanks for the A2A, Sagnik!

I know ways to achieve this in Python/PowerShell, but since you asked for R, here is what I could find on Stack Overflow; I hope it is what you are looking for. If it does not work, let me know the details and I will try to help you find the right answer.

R code to split big table into smaller .txt files and save to computer?

-USK
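For the Python route mentioned above, a splitting sketch using only the standard library might look like this (the function name, chunk size, and file-name prefix are assumptions, not taken from the linked Stack Overflow answer):

```python
import csv
import itertools

def split_csv(src, rows_per_file=1000, prefix="part"):
    """Split src into numbered files of at most rows_per_file data rows,
    repeating the header in each output file. Returns the file names."""
    out_names = []
    with open(src, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        for i in itertools.count():
            chunk = list(itertools.islice(reader, rows_per_file))
            if not chunk:
                break
            name = f"{prefix}_{i}.csv"
            with open(name, "w", newline="") as out:
                writer = csv.writer(out)
                writer.writerow(header)
                writer.writerows(chunk)
            out_names.append(name)
    return out_names
```

Because islice consumes the reader lazily, the source file is streamed once and never loaded whole into memory.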

Profile photo for Quora User
folder <- "/Users/majerus/Desktop/R/intro/data/"
# path to folder that holds multiple .csv files

file_list <- list.files(path = folder, pattern = "*.csv")
# create list of all .csv files in folder

# read in each .csv file in file_list and create a data frame
# with the same name as the .csv file
for (i in 1:length(file_list)){
  assign(file_list[i],
         read.csv(paste(folder, file_list[i], sep = '')))
}

A simple Google search would have given you the answer.

Source: Reading and Writing .csv Files in RStudio

Profile photo for Tom Smith

Depends on what you mean by “read.”

By default, a CSV file will open with Excel on a Windows machine. You can also open it with any text editor (Notepad, Notepad++, KEdit, EditPlus, to name a few).

So if you are trying to do something more than look at the contents, it would help to have more details about what you are trying to accomplish.

Profile photo for Lopamudra Pradhan

If your .csv file is in this format

id,title,number

1,'abd',2

3,'qwed',3

4,'wder',3

Then you can do that using the following query

LOAD DATA INFILE 'file path'

INTO TABLE tableName

FIELDS TERMINATED BY ','

ENCLOSED BY '"'

LINES TERMINATED BY '\n'

IGNORE 1 ROWS;

For more details you can go through this:-

14.2.6 LOAD DATA INFILE Syntax

https://www.quora.com/How-do-I-import-data-into-MySQL-from-CSV-files
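For comparison only, here is a rough Python analogue of the same import using the standard library's sqlite3 in place of MySQL (the three-column table layout matches the sample above; the function name is invented):

```python
import csv
import sqlite3

def load_csv(conn, path, table):
    """Insert every data row of the CSV into the table, skipping the
    header line (the equivalent of IGNORE 1 ROWS above)."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        rows = list(reader)
    if rows:
        placeholders = ",".join("?" * len(rows[0]))
        conn.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    conn.commit()
```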

Profile photo for Quora User

Ack, don’t. Use data.table and base R. The tidyverse is slow, and things like purrr are not only slow but totally redundant. It is just adding dependencies to your code while losing performance.

Something like the following will do it in a fraction of the time (with simpler code):

require(data.table)
file_names <- list.files("path/to/dir/with-files", full.names = TRUE)
files <- lapply(file_names, fread)
dt <- rbindlist(files)
Profile photo for Hanneke Van Hooijdonk

There is a free program called DSTV viewer. This tool can be used to open and view .NC or .DSTV files.

The DSTV viewer can be downloaded here: HGG's Free DSTV Viewer - HGG Profiling Specialist

Profile photo for Aritra Pan

fread mmaps the file. This takes some time, and it maps the whole file, which means subsequent "read-ins" will be faster.

read.table does not mmap the whole file. It can read the file in line by line [and stop at line 1000000].

You can refer to the Stack Overflow discussions below, where it is explained in detail:

Reason behind speed of fread in data.table package in R

Comparing speed of fread vs. read.table for reading the first 1M rows out of 100M

Profile photo for Quora User
load data infile 'put_filename_here.csv' into table Tablename
fields terminated by ','

This assumes the table already exists and has the same columns, in the same sequence, as your CSV file.

You can also use the mysqlimport utility, which does more or less the same thing but works from the command line rather than from the SQL prompt.

Profile photo for Hitesh Patel

I am not an R developer, but you can use the Python pandas library, which will help you handle a large CSV.

https://www.dataquest.io/blog/pandas-big-data/
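As a hedged sketch of that approach: pandas can read a large CSV in chunks so the whole file never has to sit in memory at once. The file path, column name, and chunk size below are placeholders for illustration.

```python
import pandas as pd

def column_sum(path, column, chunksize=100_000):
    """Sum one column of a CSV file without loading it all into memory."""
    total = 0
    for chunk in pd.read_csv(path, chunksize=chunksize):
        total += chunk[column].sum()
    return total
```

Passing chunksize makes read_csv return an iterator of DataFrames instead of one big frame, so memory use is bounded by the chunk size rather than the file size.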

Profile photo for Christopher Singleton

There are many options out there; these are the ones I have used.

You can use Active Server Pages in .NET, Python, C++, Java, SQL Server Integration Services (SSIS), or C# directly.

It depends on which of these you are most comfortable with.
