User talk:Sandishkumar

From Wikipedia, the free encyclopedia

SandishKumar HN
File:HDFS cartoon.png
Born: 21 February 1990, Bangalore, Karnataka, India
Residence: Bangalore, Karnataka, India
Other names: Sandy, Sany, Chinni
Occupation: Software Engineer, Software Developer, Cloud Developer
Years active: 1990–present
Spouse(s): Single
Parent(s): MohanKumar HN, Parvathi HN

Sandish Kumar H N is a Cloud Developer at Positive Bioscience. He works mainly on big data analytics, cloud computing, Amazon AWS, Hadoop, and the Hadoop ecosystem. --SandishHadoop (talk) 08:20, 22 January 2013 (UTC)

Education and early career

He holds a Diploma in Computer Science from the Department of Technological Education, Bangalore.

He holds a bachelor's degree from Vishveshwarya Technological University, Bangalore.

Sandish has held software technology positions at PointCross and Positive Bioscience.


DDSR (Drilling Data Search and Repository)

  • The DDSR project aims to provide analytics for oil and gas exploration data. The DDSR repository is built using HBase, Hadoop, and the Hadoop sub-projects. We collect data from thousands of wells across the globe; this data is stored in HBase and Hive using Hadoop MapReduce jobs. On top of this data, we build analytics for search and advanced search.
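The search-without-full-scan idea behind an HBase-backed repository can be sketched in a few lines. This is a single-process simulation, not the actual DDSR code: the row-key scheme (`"<region>#<well_id>"`), field names, and sample data are all illustrative assumptions, chosen to show how a key prefix scan serves a regional well search.

```python
# Minimal sketch (not the actual DDSR implementation) of an HBase-style
# row-key design for a well repository: rows keyed by "<region>#<well_id>"
# so that a prefix scan retrieves all wells in one region without a full
# table scan. Keys and attributes here are hypothetical.

wells = {
    "northsea#W001": {"depth_m": 3200, "status": "producing"},
    "northsea#W002": {"depth_m": 2800, "status": "abandoned"},
    "gulf#W101": {"depth_m": 4100, "status": "producing"},
}

def prefix_scan(table, prefix):
    """Emulate an HBase prefix scan: return rows whose key starts with prefix."""
    return {k: v for k, v in sorted(table.items()) if k.startswith(prefix)}

# "Search" for every well in the North Sea region via the key prefix.
north_sea_wells = prefix_scan(wells, "northsea#")
```

In real HBase the same effect comes from a `Scan` with a start row and stop row (or a prefix filter); the point is that search queries map onto key ranges rather than scans of the whole dataset.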

SDSR (Seismic Data Server & Repository)

Our Seismic Data Server & Repository solves the problem of delivering, on demand, precisely cropped SEG-Y files for instant loading at geophysical interpretation workstations anywhere in the network. Based on Hadoop file storage, HBase™, and MapReduce technology, the Seismic Data Server brings fault-tolerant, petabyte-scale storage capability to the industry. The Seismic Data Server currently supports post-stack traces, with pre-stack support to be released shortly.
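The "precisely cropped" delivery described above can be illustrated with a toy trace-selection routine. This is a simplified sketch under stated assumptions, not the SDSR implementation: it models post-stack traces as dictionaries indexed by (inline, crossline) and ignores the real SEG-Y binary headers and sample encoding.

```python
# Simplified illustration of on-demand cropping of a post-stack volume:
# given traces indexed by (inline, crossline), return only those inside a
# requested window, so a workstation loads a small cropped subset instead
# of the full survey. The trace layout and field names are hypothetical;
# real SEG-Y files carry binary headers and encoded samples.

traces = [
    {"inline": il, "crossline": xl, "samples": [0.0] * 4}
    for il in range(100, 104)
    for xl in range(200, 204)
]

def crop(traces, il_range, xl_range):
    """Select traces whose (inline, crossline) fall inside the window."""
    il_lo, il_hi = il_range
    xl_lo, xl_hi = xl_range
    return [
        t for t in traces
        if il_lo <= t["inline"] <= il_hi and xl_lo <= t["crossline"] <= xl_hi
    ]

# Request a 2-inline by 2-crossline window out of the 4x4 survey above.
window = crop(traces, (101, 102), (201, 202))
```

In a Hadoop-backed server, the same selection would be pushed down to the storage layer so only the matching traces are read and assembled into the cropped SEG-Y file.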

Data Mining on Wikipedia Data (DWD)

The main aim of this project is to understand the MapReduce execution framework in detail on large data sets, and to tune the MapReduce jobs running on the cluster using a combiner and configuration tuning parameters such as block size and sort factor. The Hadoop web UI was used to monitor the jobs running on the cluster.
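The effect of the combiner mentioned above can be shown with a small word-count simulation. This is a single-process sketch, not a real Hadoop job: the point is that each "mapper" pre-aggregates its own (word, 1) pairs locally, so fewer key/value pairs cross the network in the shuffle before the reduce phase.

```python
from collections import Counter

def mapper(line):
    # Emit a (word, 1) pair per token, as a word-count map task would.
    return [(w.lower(), 1) for w in line.split()]

def combiner(pairs):
    # Local pre-aggregation of one mapper's output before the shuffle.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return list(counts.items())

def reducer(all_pairs):
    # Final aggregation across all mappers (the reduce phase).
    counts = Counter()
    for word, n in all_pairs:
        counts[word] += n
    return dict(counts)

lines = ["the fox chased the fox", "the dog"]
mapped = [mapper(line) for line in lines]          # one list per "mapper"
combined = [combiner(m) for m in mapped]           # local combine step
shuffled = [pair for m in combined for pair in m]  # pairs crossing the network
result = reducer(shuffled)

raw_pairs = sum(len(m) for m in mapped)  # shuffle volume without a combiner
shuffle_pairs = len(shuffled)            # shuffle volume with a combiner
```

In a real Hadoop job the combiner is set via `Job.setCombinerClass`, and the other knobs the project tuned (HDFS block size, `io.sort.factor`) similarly trade memory and I/O against shuffle cost.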


External links

  1. Sandish's blog
  2. Sandish's LinkedIn profile