Package csv
Documentation

Overview
Package csv reads and writes comma-separated values (CSV) files. There are many kinds of CSV files; this package supports the format described in RFC 4180, except that Writer uses LF instead of CRLF as the newline character by default.

A csv file contains zero or more records of one or more fields per record. Each record is separated by the newline character. The final record may optionally be followed by a newline character.
field1,field2,field3
White space is considered part of a field.
Carriage returns before newline characters are silently removed.
Blank lines are ignored. A line with only whitespace characters (excluding the ending newline character) is not considered a blank line.

Fields which start and stop with the quote character " are called quoted-fields. The beginning and ending quote are not part of the field.
The source:
normal string,"quoted-field"
results in the fields
{`normal string`, `quoted-field`}

Within a quoted-field a quote character followed by a second quote character is considered a single quote.
"the ""word"" is true","a ""quoted-field"""
results in
{`the "word" is true`, `a "quoted-field"`}

Newlines and commas may be included in a quoted-field
"Multi-line
field","comma is ,"
results in
{`Multi-line
field`, `comma is ,`}

Index
Examples

Constants

This section is empty.

Variables
var (
	ErrBareQuote  = errors.New("bare \" in non-quoted-field")
	ErrQuote      = errors.New("extraneous or missing \" in quoted-field")
	ErrFieldCount = errors.New("wrong number of fields")

	// Deprecated: ErrTrailingComma is no longer used.
	ErrTrailingComma = errors.New("extra delimiter at end of line")
)
These are the errors that can be returned in [ParseError.Err].
Functions

This section is empty.

Types

type ParseError
type ParseError struct {
	StartLine int   // Line where the record starts
	Line      int   // Line where the error occurred
	Column    int   // Column (1-based byte index) where the error occurred
	Err       error // The actual error
}

A ParseError is returned for parsing errors. Line and column numbers are 1-indexed.
func (*ParseError) Error

func (e *ParseError) Error() string
func (*ParseError) Unwrap (added in go1.13)

func (e *ParseError) Unwrap() error
type Reader
type Reader struct {
	// Comma is the field delimiter.
	// It is set to comma (',') by NewReader.
	// Comma must be a valid rune and must not be \r, \n,
	// or the Unicode replacement character (0xFFFD).
	Comma rune

	// Comment, if not 0, is the comment character. Lines beginning with the
	// Comment character without preceding whitespace are ignored.
	// With leading whitespace the Comment character becomes part of the
	// field, even if TrimLeadingSpace is true.
	// Comment must be a valid rune and must not be \r, \n,
	// or the Unicode replacement character (0xFFFD).
	// It must also not be equal to Comma.
	Comment rune

	// FieldsPerRecord is the number of expected fields per record.
	// If FieldsPerRecord is positive, Read requires each record to
	// have the given number of fields. If FieldsPerRecord is 0, Read sets it to
	// the number of fields in the first record, so that future records must
	// have the same field count. If FieldsPerRecord is negative, no check is
	// made and records may have a variable number of fields.
	FieldsPerRecord int

	// If LazyQuotes is true, a quote may appear in an unquoted field and a
	// non-doubled quote may appear in a quoted field.
	LazyQuotes bool

	// If TrimLeadingSpace is true, leading white space in a field is ignored.
	// This is done even if the field delimiter, Comma, is white space.
	TrimLeadingSpace bool

	// ReuseRecord controls whether calls to Read may return a slice sharing
	// the backing array of the previous call's returned slice for performance.
	// By default, each call to Read returns newly allocated memory owned by the caller.
	ReuseRecord bool

	// Deprecated: TrailingComma is no longer used.
	TrailingComma bool
	// contains filtered or unexported fields
}

A Reader reads records from a CSV-encoded file.
As returned by NewReader, a Reader expects input conforming to RFC 4180. The exported fields can be changed to customize the details before the first call to Reader.Read or Reader.ReadAll.

The Reader converts all \r\n sequences in its input to plain \n, including in multiline field values, so that the returned data does not depend on which line-ending convention an input file uses.
Example

package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"strings"
)

func main() {
	in := `first_name,last_name,username
"Rob","Pike",rob
Ken,Thompson,ken
"Robert","Griesemer","gri"
`
	r := csv.NewReader(strings.NewReader(in))

	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}

		fmt.Println(record)
	}
}

Output:

[first_name last_name username]
[Rob Pike rob]
[Ken Thompson ken]
[Robert Griesemer gri]
Example (Options)

This example shows how csv.Reader can be configured to handle other types of CSV files.

package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	in := `first_name;last_name;username
"Rob";"Pike";rob
# lines beginning with a # character are ignored
Ken;Thompson;ken
"Robert";"Griesemer";"gri"
`
	r := csv.NewReader(strings.NewReader(in))
	r.Comma = ';'
	r.Comment = '#'

	records, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Print(records)
}

Output:

[[first_name last_name username] [Rob Pike rob] [Ken Thompson ken] [Robert Griesemer gri]]
func (*Reader) FieldPos (added in go1.17)

func (r *Reader) FieldPos(field int) (line, column int)

FieldPos returns the line and column corresponding to the start of the field with the given index in the slice most recently returned by Reader.Read. Numbering of lines and columns starts at 1; columns are counted in bytes, not runes.
If this is called with an out-of-bounds index, it panics.
func (*Reader) InputOffset (added in go1.19)

func (r *Reader) InputOffset() int64

InputOffset returns the input stream byte offset of the current reader position. The offset gives the location of the end of the most recently read row and the beginning of the next row.
func (*Reader) Read

func (r *Reader) Read() (record []string, err error)

Read reads one record (a slice of fields) from r. If the record has an unexpected number of fields, Read returns the record along with the error ErrFieldCount. If the record contains a field that cannot be parsed, Read returns a partial record along with the parse error. The partial record contains all fields read before the error. If there is no data left to be read, Read returns nil, io.EOF. If [Reader.ReuseRecord] is true, the returned slice may be shared between multiple calls to Read.
func (*Reader) ReadAll

func (r *Reader) ReadAll() (records [][]string, err error)

ReadAll reads all the remaining records from r. Each record is a slice of fields. A successful call returns err == nil, not err == io.EOF. Because ReadAll is defined to read until EOF, it does not treat end of file as an error to be reported.
Example

package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	in := `first_name,last_name,username
"Rob","Pike",rob
Ken,Thompson,ken
"Robert","Griesemer","gri"
`
	r := csv.NewReader(strings.NewReader(in))

	records, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Print(records)
}

Output:

[[first_name last_name username] [Rob Pike rob] [Ken Thompson ken] [Robert Griesemer gri]]
type Writer

type Writer struct {
	Comma   rune // Field delimiter (set to ',' by NewWriter)
	UseCRLF bool // True to use \r\n as the line terminator
	// contains filtered or unexported fields
}

A Writer writes records using CSV encoding.
As returned by NewWriter, a Writer writes records terminated by a newline and uses ',' as the field delimiter. The exported fields can be changed to customize the details before the first call to Writer.Write or Writer.WriteAll.
[Writer.Comma] is the field delimiter.
If [Writer.UseCRLF] is true, the Writer ends each output line with \r\n instead of \n.
The writes of individual records are buffered. After all data has been written, the client should call the Writer.Flush method to guarantee all data has been forwarded to the underlying io.Writer. Any errors that occurred should be checked by calling the Writer.Error method.
Example

package main

import (
	"encoding/csv"
	"log"
	"os"
)

func main() {
	records := [][]string{
		{"first_name", "last_name", "username"},
		{"Rob", "Pike", "rob"},
		{"Ken", "Thompson", "ken"},
		{"Robert", "Griesemer", "gri"},
	}

	w := csv.NewWriter(os.Stdout)

	for _, record := range records {
		if err := w.Write(record); err != nil {
			log.Fatalln("error writing record to csv:", err)
		}
	}

	// Write any buffered data to the underlying writer (standard output).
	w.Flush()

	if err := w.Error(); err != nil {
		log.Fatal(err)
	}
}

Output:

first_name,last_name,username
Rob,Pike,rob
Ken,Thompson,ken
Robert,Griesemer,gri
func (*Writer) Error (added in go1.1)

func (w *Writer) Error() error

Error reports any error that has occurred during a previous Writer.Write or Writer.Flush.
func (*Writer) Flush
func (w *Writer) Flush()
Flush writes any buffered data to the underlying io.Writer. To check if an error occurred during Flush, call Writer.Error.
func (*Writer) Write

func (w *Writer) Write(record []string) error

Write writes a single CSV record to w along with any necessary quoting. A record is a slice of strings with each string being one field. Writes are buffered, so Writer.Flush must eventually be called to ensure that the record is written to the underlying io.Writer.
func (*Writer) WriteAll

func (w *Writer) WriteAll(records [][]string) error

WriteAll writes multiple CSV records to w using Writer.Write and then calls Writer.Flush, returning any error from the Flush.
Example

package main

import (
	"encoding/csv"
	"log"
	"os"
)

func main() {
	records := [][]string{
		{"first_name", "last_name", "username"},
		{"Rob", "Pike", "rob"},
		{"Ken", "Thompson", "ken"},
		{"Robert", "Griesemer", "gri"},
	}

	w := csv.NewWriter(os.Stdout)
	w.WriteAll(records) // calls Flush internally

	if err := w.Error(); err != nil {
		log.Fatalln("error writing csv:", err)
	}
}

Output:

first_name,last_name,username
Rob,Pike,rob
Ken,Thompson,ken
Robert,Griesemer,gri