read huge text file from end


quickcur

Hi,

I have very large text files and I am only interested in the last 200
lines in each file. How can I read a huge text file line by line from
the end, something like the "tail" command in Unix?

Thanks,

qq
 

Eric Sosman

Hi,

I have very large text files and I am only interested in the last 200
lines in each file. How can I read a huge text file line by line from
the end, something like the "tail" command in Unix?

Do as "tail" does: Get the size of the file, seek to
a position (200 * average_line_length + safety_margin) bytes
before the end, and start reading. Be prepared for some
glitches if you land in the middle of a multi-byte sequence;
you may need to be tolerant of a malformed line and/or
character decoding errors when you start reading.
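
A minimal sketch of the idea in Java (Java is just my assumption
here, and the 4K margin is arbitrary; note that
RandomAccessFile.readLine does no real character decoding, so a
multi-byte encoding needs the extra care described above):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayDeque;
import java.util.Deque;

public class Tail {
    /* Rough tail(1): seek near the end, resync on a line
       boundary, keep the last n lines seen. */
    static Deque<String> tail(String path, int n, int avgLen)
            throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            long guess = (long) n * avgLen + 4096;  // safety margin
            long start = Math.max(raf.length() - guess, 0);
            raf.seek(start);
            if (start > 0)
                raf.readLine();       // discard the likely-partial line
            Deque<String> last = new ArrayDeque<>(n);
            for (String line; (line = raf.readLine()) != null; ) {
                if (last.size() == n)
                    last.removeFirst();  // keep only the newest n
                last.addLast(line);
            }
            return last;  // fewer than n? enlarge the guess and retry
        }
    }
}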

Of course, this simply isn't going to work for files
that contain statefully-encoded regions, or that have been
progressively compressed or encrypted. For "very large"
files, compression is distinctly likely -- even if you're
not using it now, you might want to ponder before committing
to a strategy that would prevent using it in the future.
 

Oliver Wong

Eric Sosman said:
Do as "tail" does: Get the size of the file, seek to
a position (200 * average_line_length + safety_margin) bytes
before the end, and start reading. Be prepared for some
glitches if you land in the middle of a multi-byte sequence;
you may need to be tolerant of a malformed line and/or
character decoding errors when you start reading.

Of course, this simply isn't going to work for files
that contain statefully-encoded regions, or that have been
progressively compressed or encrypted. For "very large"
files, compression is distinctly likely -- even if you're
not using it now, you might want to ponder before committing
to a strategy that would prevent using it in the future.

Hopefully, the compression would be handled by the underlying OS, and it
would all work "transparently" to your application.

Otherwise, you're no longer dealing with text files (in the traditional
sense), and if you've got custom file formats, you could do tricks like
actually encode the offset of the 200th line from the end into the header.
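A sketch of that trick, with a completely made-up layout (the 8-byte
slot and its upkeep by the writer are assumptions, not any existing
format):

import java.io.IOException;
import java.io.RandomAccessFile;

public class IndexedLog {
    /* Hypothetical format: bytes 0-7 hold the offset of the line
       that is currently 200th from the end; the writer updates
       this slot on every append. */
    static void seekToLast200(RandomAccessFile raf) throws IOException {
        raf.seek(0);
        long offset = raf.readLong();  // maintained by the writer
        raf.seek(offset);              // read lines from here on
    }
}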

- Oliver
 

Eric Sosman

Oliver Wong wrote on 10/31/06 17:23:
(e-mail address removed) wrote on 10/31/06 15:45:
Hi,

I have very large text files and I am only interested in the last 200
lines in each file. How can I read a huge text file line by line from
the end, something like the "tail" command in Unix?

Do as "tail" does: Get the size of the file, seek to
a position (200 * average_line_length + safety_margin) bytes
before the end, [...]

Of course, this simply isn't going to work for files
that contain statefully-encoded regions, or that have been
progressively compressed or encrypted. For "very large"
files, compression is distinctly likely -- even if you're
not using it now, you might want to ponder before committing
to a strategy that would prevent using it in the future.


Hopefully, the compression would be handled by the underlying OS, and it
would all work "transparently" to your application.

It might "work" in the sense of "get to the data as
desired," but only by reading and decompressing everything
before that point -- which sort of vitiates the performance
advantage of the seek, don't you think?
 

Mike Schilling

Eric Sosman said:
It might "work" in the sense of "get to the data as
desired," but only by reading and decompressing everything
before that point -- which sort of vitiates the performance
advantage of the seek, don't you think?

But that's not how OS file compression works. Generally, there's a page
size (8K or thereabouts), and each page is compressed separately, with the
OS keeping track of where each compressed page actually starts. A
random-access read requires figuring out where the pages containing the byte
range live and decompressing only those pages.
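
To put numbers on that, a small illustration (the 8K unit is the
figure above, not a promise about any particular OS):

public class PageMath {
    static final long PAGE = 8 * 1024;  // assumed compression-unit size

    // Pages touched by a read of `len` bytes starting at `off`:
    static long firstPage(long off)          { return off / PAGE; }
    static long lastPage(long off, long len) { return (off + len - 1) / PAGE; }

    public static void main(String[] args) {
        // A 100-byte read at offset 1,000,000 touches only page 122,
        // so one 8K page gets decompressed, not everything before it.
        System.out.println(firstPage(1_000_000) + ".."
                + lastPage(1_000_000, 100));
    }
}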
 

Eric Sosman

Mike said:
But that's not how OS file compression works. Generally, there's a page
size (8K or thereabouts), and each page is compressed separately, with the
OS keeping track of where each compressed page actually starts. A
random-access read requires figuring out where the pages containing the byte
range live and decompressing only those pages.

Look among the bits and pieces of snippage lying about on the
cutting-room floor, and you'll notice I wrote about files that
were "progressively compressed" or "progressively encrypted."
My terminology is probably inexact, but I meant "progressively"
to describe the sort of compressor/encryptor whose state at a
given point in the data stream is a function of the entire history
of the stream up to that point. gzip, for example.
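
Concretely, with gzip there's no shortcut: even skip() on a
GZIPInputStream inflates every byte it passes over, so a sketch like
this costs about as much as reading the whole file:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class GzipNoSeek {
    /* "Seeking" in a .gz file: skip() still decompresses everything
       it passes over, so reaching the tail of a large file is as
       expensive as reading it all. */
    static void skipTo(String path, long uncompressedOffset)
            throws IOException {
        try (GZIPInputStream in =
                new GZIPInputStream(new FileInputStream(path))) {
            long left = uncompressedOffset;
            while (left > 0) {
                long n = in.skip(left);  // inflates as it goes
                if (n <= 0) break;       // end of stream
                left -= n;
            }
            // ... read from here ...
        }
    }
}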
 
