Accumulo >> mail # dev >> FindBugs: INTEGER_MULTIPLY_CAST_TO_LONG question


Re: FindBugs: INTEGER_MULTIPLY_CAST_TO_LONG question
We copied this block cache code from HBase a while ago.  Need to take
a look at the changes in HBase and see if we should update our code.

On Sat, Sep 8, 2012 at 8:40 AM, David Medinets <[EMAIL PROTECTED]> wrote:
> accumulo.core.file.blockfile.cache.LruBlockCache, line 656
>
>   public static long calculateOverhead(long maxSize, long blockSize,
> int concurrency) {
>     return CACHE_FIXED_OVERHEAD + ClassSize.CONCURRENT_HASHMAP +
> ((int) Math.ceil(maxSize * 1.2 / blockSize) *
> ClassSize.CONCURRENT_HASHMAP_ENTRY)
>         + (concurrency * ClassSize.CONCURRENT_HASHMAP_SEGMENT);
>   }
>
> This code is flagged as:
>
>  ICAST_INTEGER_MULTIPLY_CAST_TO_LONG: Result of integer multiplication
> cast to long
>
> This code performs integer multiply and then converts the result to a
> long, as in:
>
>     long convertDaysToMilliseconds(int days) { return 1000*3600*24*days; }
>
> If the multiplication is done using long arithmetic, you can avoid the
> possibility that the result will overflow. For example, you could fix
> the above code to:
>
>     long convertDaysToMilliseconds(int days) { return 1000L*3600*24*days; }
>
> or
>
>     static final long MILLISECONDS_PER_DAY = 24L*3600*1000;
>     long convertDaysToMilliseconds(int days) { return days *
> MILLISECONDS_PER_DAY; }
>
> ----
>
> 1. Can more of the values involved (say, ClassSize.CONCURRENT_HASHMAP) be longs?
> 2. How would this method be changed to resolve this message?
> 3. Can the multiplication steps be assigned to reasonably named
> variables to self-document what is being added?
> 4. Can a comment be added explaining why 1.2 is being used?
> 5. What overhead is being calculated?
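One way questions 2 and 3 might be resolved is by keeping every multiplication in long arithmetic and naming the intermediate terms. The sketch below is only illustrative: the constant values are placeholders (not Accumulo's real ClassSize figures), and the name MAP_LOAD_FACTOR for the 1.2 factor is a guess, since its actual meaning is exactly what question 4 asks about.

```java
public class LruBlockCacheOverhead {

    // Placeholder values -- Accumulo's actual ClassSize constants differ.
    static final long CACHE_FIXED_OVERHEAD = 40;
    static final long CONCURRENT_HASHMAP = 64;
    static final long CONCURRENT_HASHMAP_ENTRY = 48;
    static final long CONCURRENT_HASHMAP_SEGMENT = 64;

    // Hypothetical name for the unexplained 1.2 factor (see question 4);
    // presumably it allows for entries beyond maxSize/blockSize.
    static final double MAP_LOAD_FACTOR = 1.2;

    public static long calculateOverhead(long maxSize, long blockSize,
            int concurrency) {
        // Each factor is a long (or widened to one), so every multiply is
        // done in long arithmetic and FindBugs has nothing to flag.
        long entryCount = (long) Math.ceil(maxSize * MAP_LOAD_FACTOR / blockSize);
        long entryOverhead = entryCount * CONCURRENT_HASHMAP_ENTRY;
        long segmentOverhead = (long) concurrency * CONCURRENT_HASHMAP_SEGMENT;
        return CACHE_FIXED_OVERHEAD + CONCURRENT_HASHMAP
                + entryOverhead + segmentOverhead;
    }

    public static void main(String[] args) {
        // e.g. 1 GB cache, 64 KB blocks, 16-way concurrency
        System.out.println(calculateOverhead(1024L * 1024 * 1024, 64L * 1024, 16));
    }
}
```

The named locals double as the self-documentation question 3 asks for, and each addend in the return statement now maps to one identifiable piece of the cache's bookkeeping.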