Re: 10 TB of a data file.
From Wikipedia:

The actual amount of disk space <http://en.wikipedia.org/wiki/Computer_data_storage> consumed by the file depends on the file system <http://en.wikipedia.org/wiki/File_system>. The maximum file size a file system supports depends on the number of bits <http://en.wikipedia.org/wiki/Bit> reserved to store size information and the total size of the file system.

You can read more at http://en.wikipedia.org/wiki/File_size
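To make the quoted point concrete, here is a minimal sketch (my illustration, not from the mail above) of how the number of bits in a size field bounds file size: an n-bit field can represent at most 2^n - 1 bytes. The specific file-system figures (FAT32's 32-bit size field, ext4's 16 TiB per-file limit) are examples I am adding, not something stated in the thread.

// Sketch: maximum file size as a function of the size field's bit width.
public class MaxFileSize {
    // Largest byte count representable in an n-bit unsigned size field.
    static double maxBytes(int bits) {
        return Math.pow(2, bits) - 1;
    }

    public static void main(String[] args) {
        double tenTB = 10e12; // 10 TB, decimal

        // FAT32 stores file sizes in a 32-bit field: roughly a 4 GiB cap.
        System.out.printf("32-bit size field: %.1f GiB max%n",
                maxBytes(32) / Math.pow(2, 30));

        // A 64-bit size field, as in most modern file systems: ~16 EiB.
        System.out.printf("64-bit size field: %.1f EiB max%n",
                maxBytes(64) / Math.pow(2, 60));

        // A 10 TB file overflows a 32-bit size field but fits easily
        // under a 64-bit one (ext4, for instance, allows up to 16 TiB).
        System.out.println("10 TB fits in 32-bit field? " + (tenTB <= maxBytes(32)));
        System.out.println("10 TB fits in 64-bit field? " + (tenTB <= maxBytes(64)));
    }
}

So the answer to the original question is that a single 10 TB file is representable on a modern file system, even though it would be far past the limit of an older one like FAT32.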
On Fri, Apr 12, 2013 at 1:40 PM, Sai Sai <[EMAIL PROTECTED]> wrote:

> In the real world, can a file really be as big as 10 TB?
> Would the data be put into a txt file, or what kind of file?
> If someone wanted to open such a big file to look at its contents, would
> the OS support opening files that big?
> If not, how should this kind of scenario be handled?
> Any input will be appreciated.
> Thanks
> Sai
>

--
Nitin Pawar