slice is a fintech startup focused on India’s young population. We aim to build a smart, simple, and transparent platform that redesigns the financial experience for millennials and brings success and happiness to people’s lives. Growing with the new generation is what we dream about and all that we want. We believe that personalization, combined with an extreme focus on superior customer service, is the key to building long-lasting relationships with young people.
In this role, you will have the opportunity to create a significant impact on our business and, most importantly, our customers through your technical expertise in data, as we take on challenges that can reshape the financial experience for the next generation. If you are a highly motivated team player with a knack for solving problems through technology, we have the perfect job for you.
What you’ll do:
Work closely with the Engineering and Analytics teams on schema design, database normalization, query optimization, etc.
Work with AWS cloud services: S3, EMR, Glue, RDS, Redshift
Build new, and improve existing, ETL infrastructure for workflows drawing on a wide variety of data sources, using SQL, NoSQL, and AWS big data technologies
Manage and monitor the performance, capacity, and security of database systems, and regularly perform server tuning and maintenance
Manage and process our Data Lake and ETL pipelines
Debug and troubleshoot database errors
Identify, design, and implement internal process improvements: optimizing data delivery, redesigning infrastructure for greater scalability, and archiving data
You’ll lead data projects, backing decisions with solid data points and facts.
You’ll guide the data team towards the company’s goals.
Data Projects You’ll be working on:
Slice Data Lake: an S3-based data warehouse built with streaming data pipelines, AWS Glue ETL jobs, and AWS Athena
Slice User Graph: a graph database, built and maintained on AWS Neptune, for easy access to information about connected slice users
Data Pipelines: moving data from various sources (SQL, NoSQL, and ledger databases) into the data lake and/or Redshift for analytics use cases
Redshift Optimizations: keeping the data warehouse healthy, and suggesting and implementing optimizations
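For a flavor of the pipeline work described above, here is a minimal, purely hypothetical sketch (not slice’s actual code) of one common ETL step: flattening a nested NoSQL-style document into a flat row suitable for loading into a columnar warehouse such as Redshift. All field names are invented for illustration:

```python
# Hypothetical sketch: flatten a nested document (e.g. from MongoDB)
# into a single-level dict whose keys map onto warehouse columns.

def flatten(doc, parent_key="", sep="_"):
    """Recursively flatten a nested dict into a single-level dict."""
    items = {}
    for key, value in doc.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

# Example with an invented user document:
doc = {"user_id": 42, "profile": {"city": "Bengaluru", "kyc": {"verified": True}}}
row = flatten(doc)
# row == {"user_id": 42, "profile_city": "Bengaluru", "profile_kyc_verified": True}
```

In a real pipeline this kind of transform would typically run inside a Glue or Spark job before the load step.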
What we’re looking for:
3+ years of experience working as a Data Engineer
2+ years of experience handling ETL jobs
2+ years of experience with Spark and Hadoop technologies
Good knowledge of OLAP data warehouse design
Experience with a scripting language, preferably Python
Experience with SQL and NoSQL database technologies such as Redshift, MongoDB, and Postgres/MySQL
Good to haves:
Experience with AWS big data tools (EMR, AWS Glue, S3) is a plus
Experience with graph databases (Neo4j, OrientDB, and/or AWS Neptune) and search databases (Elasticsearch) is a plus