Required Minimum Experience: 6+ years
Mandatory Skill Set: At least 2-3 years of hands-on experience with AWS, Python, DBT, Airflow, and Snowflake; strong SQL (minimum 5 years of experience)
About Us
CLOUDSUFI is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Job Description
CLOUDSUFI is seeking a Data Engineer to build scalable software solutions. You will join a cross-functional team responsible for understanding data systems and optimizing data workflows for performance and usability. This individual will help lead and manage the design, development, and maintenance of data infrastructure and pipelines within the organization, and will be responsible for ensuring the reliability, scalability, and efficiency of data systems.
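For context, here is a minimal sketch of how the core stack in the mandatory skill set (Python, DBT, Airflow, Snowflake) typically fits together. The DAG id, schedule, and dbt project path are hypothetical placeholders, not a prescribed implementation:

```python
# A minimal Airflow DAG sketch: orchestrate dbt model builds against a
# Snowflake target configured in dbt's profiles.yml. All names below
# (dag_id, project path) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_snowflake_models",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Build the dbt models (SQL transformations) in Snowflake.
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )

    # Validate the freshly built models with dbt's tests.
    test_models = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )

    run_models >> test_models
```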
Responsibilities
- Consult with the Product/Program Manager to identify the minimum viable product and decompose the feature set into small, scoped user stories.
- Set the functional product direction for application development and cascade designs to the offshore development team.
- Design, deploy, manage, and operate scalable, highly available, and fault-tolerant ETL / BI / Big Data / Analytics systems on AWS.
- Implement and control the flow of data to and from AWS (see the sketch after this list).
- Set up connectivity between on-premises systems and AWS with appropriate security configurations.
- Select the appropriate AWS service based on compute, data, and security requirements.
- Identify appropriate use of AWS operational best practices.
- Participate in daily Scrum calls, sprint planning, retrospectives, and demo meetings.
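As a concrete illustration of the data-flow responsibility above, a minimal boto3 sketch; the bucket and object names are hypothetical:

```python
# Move data to and from AWS S3 with boto3 (bucket/key names hypothetical).
import boto3

s3 = boto3.client("s3")

# Push a locally produced extract into a landing bucket.
s3.upload_file("daily_extract.csv", "example-landing-bucket", "raw/daily_extract.csv")

# Pull a curated file back down for a downstream on-premises consumer.
s3.download_file("example-curated-bucket", "curated/report.csv", "report.csv")
```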
Required Experience
- Experience in data processing using Databricks on AWS, and experience in data lake implementation.
- Hands-on experience in programming languages such as Python, Scala, or Java, plus shell scripting.
- Strong experience in SQL.
- Experience in data warehouse or data lake implementation on any major cloud provider (AWS, Azure, or GCP).
- Experience in batch and real-time processing implementations using Sqoop, Kafka, Hadoop, Spark, Hive, etc. (see the sketch after this list).
- Experience in data quality management and best practices across data solution implementations.
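By way of illustration, a minimal PySpark batch-job sketch of the kind this list implies; the S3 paths, column names, and aggregation are hypothetical placeholders:

```python
# A minimal PySpark batch job: read raw events, apply a basic
# data-quality gate, aggregate, and write to a curated zone.
# All paths and columns below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders-batch").getOrCreate()

# Read one day's raw events from a (hypothetical) data lake path.
orders = spark.read.parquet("s3://example-lake/raw/orders/dt=2024-01-01/")

# Data-quality gate: drop rows missing the primary key.
clean = orders.filter(F.col("order_id").isNotNull())

# Aggregate per customer and write back to the curated zone.
daily = clean.groupBy("customer_id").agg(F.sum("amount").alias("daily_spend"))
daily.write.mode("overwrite").parquet("s3://example-lake/curated/daily_spend/dt=2024-01-01/")

spark.stop()
```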
Behavioral Competencies
- Should have very good verbal and written communication, technical articulation, listening, and presentation skills.
- Should have proven analytical and problem-solving skills.
- Should have demonstrated effective task prioritization, time management, and internal/external stakeholder management skills.
- Should be a quick learner, self-starter, go-getter, and team player.
- Should have experience working under stringent deadlines in a matrix organization structure.