Apply Here
Software Engineer, Kriddha Technologies, Inc. (Alpharetta, GA)
- Develop ETL pipelines into and out of data warehouses using a combination of Python and Snowflake's SnowSQL; write complex custom SQL queries against Snowflake.
- Develop Spark applications in Python (PySpark) on a distributed environment to load large numbers of CSV files with differing schemas into Hive ORC tables (a sketch follows this list).
- Convert Hive/SQL queries into Spark transformations using Spark RDD and PySpark concepts (see the second sketch below).
- Create bash scripts to automate small but time-consuming tasks such as user creation and importing multiple tables from an RDBMS into HDFS using Sqoop.
- Apply expertise in data warehouses such as Azure Synapse and Snowflake.
- Perform data migration from AWS S3 to Snowflake using the COPY command, and perform data validation (see the third sketch below).
- Work on NoSQL-supported enterprise production systems, loading data into Cassandra using Impala and Sqoop.
- Use Amazon EMR for Spark jobs and test locally using Jenkins.
- Use Jenkins and Bitbucket for CI/CD deployments into Dev, UAT, and Production environments.
- Use GitHub and Git Bash to commit and push changes to the scripts developed per story requirements.
- Work with data from varied sources, including fixed-width files from SFTP Linux servers.
- Work on Hadoop clusters deployed in Linux environments.
- Participate in end-to-end development and automation of new ETL pipelines using SQL and PySpark.
- Compile and execute programs as necessary using Apache Spark in Scala to perform ETL jobs on ingested data.
- Transform data imported from and exported to various databases (Oracle, MySQL) into HDFS using Talend.
- Migrate an on-premises data warehouse to Azure Synapse using Azure Data Factory (ADF).
- Design and develop ETL processes in AWS Glue to migrate campaign data from external sources such as S3 (ORC/Parquet/text files) into AWS Redshift.
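
The PySpark loading duty above might look like the following minimal sketch; the HDFS path and Hive table names are assumed for illustration only:

    from pyspark.sql import SparkSession

    # A minimal sketch; the HDFS path, database, and table names are
    # hypothetical placeholders, not details from the posting.
    spark = (SparkSession.builder
             .appName("csv_to_hive_orc")
             .enableHiveSupport()   # required to write Hive-managed tables
             .getOrCreate())

    # Read a batch of CSV files; header and schema inference cope with
    # sources whose columns differ from one file set to the next.
    df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///landing/source_a/*.csv"))

    # Append the frame into a Hive table stored as ORC.
    df.write.format("orc").mode("append").saveAsTable("staging.source_a_orc")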
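
Converting a Hive/SQL query into an equivalent Spark transformation, as the duties describe, could be sketched like this (the sales.orders table and its columns are hypothetical):

    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("hive_to_spark")
             .enableHiveSupport()
             .getOrCreate())

    # The original Hive/SQL aggregation:
    #   SELECT customer_id, SUM(amount) AS total
    #   FROM sales.orders GROUP BY customer_id;
    sql_df = spark.sql(
        "SELECT customer_id, SUM(amount) AS total "
        "FROM sales.orders GROUP BY customer_id")

    # The same logic expressed as a DataFrame transformation:
    df = (spark.table("sales.orders")
          .groupBy("customer_id")
          .agg(F.sum("amount").alias("total")))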
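
The S3-to-Snowflake COPY migration could be driven from Python with the Snowflake connector, as in this sketch; every connection parameter and object name here is a placeholder:

    import snowflake.connector

    # Connection parameters, the external stage (@s3_stage), and the
    # target table (raw.events) are placeholders.
    conn = snowflake.connector.connect(
        account="<account>", user="<user>", password="<password>",
        warehouse="<warehouse>", database="<database>", schema="<schema>")
    cur = conn.cursor()

    # Bulk-load staged CSV files from S3 into the target table.
    cur.execute("""
        COPY INTO raw.events
        FROM @s3_stage/events/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)

    # Simple validation step: confirm the row count after the load.
    cur.execute("SELECT COUNT(*) FROM raw.events")
    print("rows loaded:", cur.fetchone()[0])
    conn.close()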

Requirements:

Master’s degree in Computer Science, Information Technology Management, or a closely related IT field. One year of experience as a Software Engineer, Big Data Hadoop Developer, ETL Developer, or similar. Working knowledge of Python, Spark, Scala, Hadoop, Hive, SQL; RDBMS, MySQL, Snowflake, Azure Synapse, Cassandra; ServiceNow; Jenkins, Bitbucket, GitHub; Linux, SFTP servers; AWS and Azure. Work assignments may be in various unanticipated work locations in the US.


Apply by resume to:

snigdha@kriddha.com
