Search criteria: . Results 1 to 10 of 97 (0.0s).

How does Spark set task indexes? - Spark - [mail # user]
...Yes I've noticed this one and its related cousin, but not sure this is the same issue there; our job "properly" ends after 6 attempts. We'll try with disabled speculative mode anyway! On 25 May...

How does Spark set task indexes? - Spark - [mail # user]
...Hi, I'm wondering how Spark is setting the "index" of a task? I'm asking this question because we have a job that constantly fails at task index = 421. When increasing the number of partitions, this t...

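A task's index in the Spark UI corresponds to the index of the partition it processes within its stage, so a failure pinned to index 421 usually points at skewed or bad data in one specific partition. The change discussed in this thread, disabling speculative execution, can be sketched as a plain Python dict of real Spark properties (the helper rendering them as `spark-submit` flags is a hypothetical convenience, not a Spark API):

```python
# Real Spark property discussed in the thread; in a real job this would be
# set via SparkConf or spark-submit --conf.
conf = {
    # Turn off speculative re-execution of slow tasks.
    "spark.speculation": "false",
}

def to_submit_args(conf):
    """Render a properties dict as spark-submit --conf flags (hypothetical helper)."""
    return " ".join(f"--conf {k}={v}" for k, v in sorted(conf.items()))

print(to_submit_args(conf))  # → --conf spark.speculation=false
```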
How to add an accumulator for a Set in Spark - Spark - [mail # user]
...Btw, here is a great article about accumulators and all their related traps! http://imranrashid.com/posts/Spark-Accumulators/ (I'm not the author) On 16 March 2016 at 18:24, swetha kasireddy wr...

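A Spark accumulator needs a zero value and an associative, commutative merge; for a set-valued accumulator the zero is the empty set and the merge is union. A plain-Python sketch of that contract (the class here is a toy model mirroring the shape of, but not actually using, Spark's accumulator API):

```python
class SetAccumulator:
    """Toy set-valued accumulator: zero = empty set, merge = set union."""

    def __init__(self):
        self._value = set()

    def add(self, item):
        # In Spark this side runs on executors, once per element.
        self._value.add(item)

    def merge(self, other):
        # In Spark the driver merges per-partition partial results;
        # union is associative and commutative, so merge order is irrelevant.
        self._value |= other._value

    @property
    def value(self):
        return self._value

# Simulate two partitions accumulating independently, then merging.
acc1, acc2 = SetAccumulator(), SetAccumulator()
for x in [1, 2, 2]:
    acc1.add(x)
for x in [2, 3]:
    acc2.add(x)
acc1.merge(acc2)
print(sorted(acc1.value))  # → [1, 2, 3]
```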
df.partitionBy().parquet() java.lang.OutOfMemoryError: GC overhead limit exceeded - Spark - [mail # user]
...Very interested in that topic too, thanks Cheng for the direction! We'll give it a try as well. On 3 December 2015 at 01:40, Cheng Lian wrote: > You may try to set Hadoop conf "parquet...

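The usual cause of this OOM is that each write task keeps one open Parquet writer per distinct partition value it encounters, and each writer buffers a full row group in memory before flushing, so memory grows with the number of keys seen per task. A back-of-the-envelope sketch (128 MB is Parquet's default row-group size; the key count is a made-up example):

```python
def writer_memory_bytes(distinct_keys_per_task, row_group_bytes=128 * 1024 * 1024):
    """Rough lower bound on buffer memory one task needs when it keeps an
    open Parquet writer (one buffered row group) per partition key it sees."""
    return distinct_keys_per_task * row_group_bytes

# e.g. 50 distinct partition keys seen by one task, at the 128 MB default:
gib = writer_memory_bytes(50) / 2**30
print(gib)  # → 6.25
```

This is why lowering the row-group size, or reducing how many distinct keys each task sees (e.g. by repartitioning on the partition column first), tames the memory footprint.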
[POWERED BY] Please add our organization - Spark - [mail # user]
...Oh, right! I think it was user@ at the time I wrote my first message but it's clear now! Thanks Sean, On 2 December 2015 at 11:56, Sean Owen wrote: > Same, not sure if anyone handles th...

[POWERED BY] Please add our organization - Spark - [mail # user]
...Hi folks, You're probably busy, but any update on this? :) On 16 November 2015 at 16:04, Adrien Mogenet <[EMAIL PROTECTED]> wrote: > Name: Content Square > URL: http://www.contentsqu...

[POWERED BY] Please add our organization - Spark - [mail # user]
...Name: Content Square URL: http://www.contentsquare.com Description: We use Spark to regularly read raw data, convert them into Parquet, and process them to create advanced analytics dashboards: ...

[HBASE-9260] Timestamp Compactions - HBase - [issue]
...TSCompactions. The issue: One of the biggest issues I currently deal with is compacting big stores, i.e. when the HBase cluster is 80% full on 4 TB nodes (let's say with a single big table), compactions ...

Split content into multiple Parquet files - Spark - [mail # user]
...My bad, I realized my question was unclear. I did a partitionBy when using saveAsHadoopFile. My question was about doing the same thing for Parquet files. We were using Spark 1.3.x, but now that...

Split content into multiple Parquet files - Spark - [mail # user]
...Hi there, We've spent several hours to split our input data into several parquet files (or several folders, i.e. /datasink/output-parquets//foobar.parquet), based on a low-cardinality key. This ...

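Since Spark 1.4, `DataFrameWriter.partitionBy` does exactly this: `df.write.partitionBy("key").parquet(path)` produces a Hive-style layout with one `key=value` sub-directory per distinct key. A plain-Python sketch of that grouping (an in-memory dict stands in for the directory tree; the records and the `country` column are made-up examples):

```python
from collections import defaultdict

def partition_by(records, key):
    """Group records into Hive-style 'key=value' buckets, mimicking the
    directory layout DataFrameWriter.partitionBy produces on disk."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[f"{key}={rec[key]}"].append(rec)
    return dict(buckets)

records = [
    {"country": "fr", "visits": 10},
    {"country": "de", "visits": 7},
    {"country": "fr", "visits": 3},
]
layout = partition_by(records, "country")
print(sorted(layout))  # → ['country=de', 'country=fr']
```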
High iowait in idle hbase cluster - Hadoop - [mail # user]
...What is your disk configuration? JBOD? If RAID, possibly a dysfunctional RAID controller, or a constantly-rebuilding array. Do you have any idea which files the read blocks are linked to? On 4 ...

High iowait in idle hbase cluster - Hadoop - [mail # user]
...Is the uptime of the RS "normal"? No quick and global reboot that could lead to a region-reallocation storm? On 3 September 2015 at 18:42, Akmal Abbasov wrote: > Hi Adrien, > I've tried to...

High iowait in idle hbase cluster - Hadoop - [mail # user]
...Is your HDFS healthy (fsck /)? Same for hbase hbck? What's your replication level? Can you see constant network use as well? Anything that might be triggered by the HBase master? (something like ...

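The checks suggested in this thread translate to the following commands, run against the live cluster (the `iostat` line is my addition for watching the iowait itself, not from the thread; exact output depends on your Hadoop/HBase versions):

```shell
# HDFS block and replication health (reports under-replicated/corrupt blocks)
hdfs fsck /

# HBase region consistency check
hbase hbck

# Per-device I/O utilization and await, sampled every 5 seconds
iostat -x 5
```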
How to determine the value for spark.sql.shuffle.partitions? - Spark - [mail # user]
...Not sure it would help and answer your question at 100%, but the number of partitions is supposed to be at least roughly double your number of cores (surprised to not see this point in your lis...

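The rule of thumb above, at least ~2x the total core count so that uneven tasks don't leave cores idle, can be sketched as simple arithmetic (the executor counts are made-up examples; note `spark.sql.shuffle.partitions` defaults to 200):

```python
def suggested_shuffle_partitions(num_executors, cores_per_executor, factor=2):
    """Heuristic from the thread: partitions >= ~2x the total core count,
    so slow tasks can be balanced across otherwise-idle cores."""
    return factor * num_executors * cores_per_executor

# e.g. 10 executors with 8 cores each:
print(suggested_shuffle_partitions(10, 8))  # → 160
```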
Parquet partitioning for unique identifier - Spark - [mail # user]
...Any code / Parquet schema to provide? I'm not sure I understand which step fails right there... On 3 September 2015 at 04:12, Raghavendra Pandey <[EMAIL PROTECTED]> wrote: > Did you s...

Unable to understand error “SparkListenerBus has already stopped! Dropping event …” - Spark - [mail # user]
...Hi there, I'd like to know if anyone has a magic method to avoid such messages in Spark logs: 2015-08-30 19:30:44 ERROR LiveListenerBus:75 - SparkListenerBus has already stopped! Dropping eventS...
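This message typically means something posted an event to the listener bus after `SparkContext.stop()` had already shut it down, e.g. tasks or callbacks finishing during shutdown, which is usually harmless noise. A plain-Python sketch of that race (the class is a toy model of the behavior, not Spark's actual LiveListenerBus):

```python
class ToyListenerBus:
    """Toy model of a stoppable event bus that drops events posted after stop."""

    def __init__(self):
        self.stopped = False
        self.delivered, self.dropped = [], []

    def post(self, event):
        if self.stopped:
            # This branch is what produces the "already stopped! Dropping
            # event ..." ERROR line in the logs.
            self.dropped.append(event)
            return
        self.delivered.append(event)

    def stop(self):
        self.stopped = True

bus = ToyListenerBus()
bus.post("SparkListenerTaskEnd")         # delivered normally
bus.stop()                               # e.g. SparkContext.stop()
bus.post("SparkListenerStageCompleted")  # arrives late -> dropped
print(len(bus.delivered), len(bus.dropped))  # → 1 1
```

The practical fix is usually ordering: make sure nothing (shutdown hooks, async jobs, streaming contexts) can still emit events after the context is stopped.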