
AWS Data Engineer

Fairfax, VA, USA
Data Engineer
CGI
Actively hiring


The data engineer will apply software engineering best practices throughout their work and leverage the latest cloud technologies to innovate and provide value to our customers.
This position is located in our Lafayette, LA office; however, a hybrid working model is acceptable.


Your future duties and responsibilities
Our AWS Data Engineer will be a key contributor with the following responsibilities:
• Work with the technical development team and team leader to understand desired application capabilities.
• Work with clients and legacy systems to set up ingestion pipelines that pull new and updated artifacts into knowledge bases.
• Continuously improve machine learning models.
• Design and apply data architectures that enable both field-based search and semantic search.
• Identify conflicting artifacts and prioritize accurate or more recent artifacts within the knowledge bases.
• Work within and across Agile teams to test and support technical solutions across a full stack of development tools and technologies.
• Develop applications using AWS data and analytics technologies including, but not limited to: OpenSearch, RDS, S3, Athena; Lambda, Step Functions, Glue; SageMaker, Textract, Comprehend, Bedrock, AI Chatbot/Lex; SQS, SNS; CloudTrail, CloudWatch; VPC, EC2, ECS, and IAM.
• Apply application development lifecycles and continuous integration/deployment practices.
• Integrate open-source components into data-analytic solutions.
• Work with vendors to enhance tool capabilities to meet enterprise needs.
• Continuously learn and share learnings with others.
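As an illustration of the artifact-prioritization responsibility above, here is a minimal, self-contained sketch (all record names and fields are hypothetical, not from any CGI system) of keeping only the most recently updated version of each artifact in a knowledge base:

```python
from datetime import datetime, timezone

def latest_artifacts(artifacts):
    """Given artifact records that may conflict (same ID ingested more
    than once), keep only the most recently updated record per ID."""
    latest = {}
    for art in artifacts:
        current = latest.get(art["id"])
        # Prefer the record with the newer update timestamp.
        if current is None or art["updated_at"] > current["updated_at"]:
            latest[art["id"]] = art
    return list(latest.values())

records = [
    {"id": "doc-1", "updated_at": datetime(2024, 1, 5, tzinfo=timezone.utc), "body": "old"},
    {"id": "doc-1", "updated_at": datetime(2024, 3, 9, tzinfo=timezone.utc), "body": "new"},
    {"id": "doc-2", "updated_at": datetime(2024, 2, 1, tzinfo=timezone.utc), "body": "only"},
]
deduped = latest_artifacts(records)
```

In a real pipeline the same last-writer-wins logic would typically run inside the ingestion step before documents are indexed for search.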


Required qualifications to be successful in this role
• Proficient in navigating the AWS console and in programmatic interaction with AWS through the AWS Python SDKs and the AWS CLI.
• Hands-on experience with AWS services such as RDS, S3, Lambda, Step Functions, Glue, SQS, SNS, CloudTrail, CloudWatch, VPC, EC2, and IAM.
• Proficiency in Python: data structures, writing custom classes/modules, object-oriented code organization, data extraction/transformation/serialization, database/API interaction, creating virtual environments, and the AWS Python SDK.
• Deep experience troubleshooting complex end-to-end data processing issues whose causes may stem from library code, workflow logic, API inconsistencies, network issues, corrupt data, out-of-order updates, or AI hallucinations.
• Hands-on experience with high-volume data application development and with version control systems such as Git.
• Hands-on experience implementing data ingestion processes incorporating ETL.
• Hands-on experience in data modeling and relational database design for large datasets.
• Knowledge of application development lifecycles and continuous integration/deployment practices.
• 7-10 years of experience delivering and operating large-scale, highly visible distributed systems.
• Knowledge of infrastructure as code (IaC) using Terraform is preferred.
• Agile development experience.
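To illustrate the kind of Python proficiency listed above (data extraction, transformation, and serialization), a small self-contained sketch; the sample data and function names are illustrative only:

```python
import csv
import io
import json

# Raw input as it might arrive from a legacy export: inconsistent
# whitespace and casing that the transform step must normalize.
RAW = """id,amount,currency
1, 19.99 ,usd
2,5.00,USD
"""

def extract(text):
    """Extract rows from raw CSV text into dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Normalize field types and currency casing."""
    return [
        {
            "id": int(r["id"]),
            "amount": float(r["amount"]),
            "currency": r["currency"].strip().upper(),
        }
        for r in rows
    ]

def serialize(rows):
    """Serialize transformed rows to JSON for downstream loading."""
    return json.dumps(rows, sort_keys=True)

payload = serialize(transform(extract(RAW)))
```

The same extract/transform/serialize shape scales naturally to an AWS ingestion pipeline, with the extract step reading from S3 and the serialized output written to a knowledge-base index.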
Desired qualifications / non-essential skills:
• DevOps practices: IaC for pipelines, pipeline monitoring and logging, code versioning, data versioning, and container authoring.
• Experience working with the Atlassian toolset (Jira, Confluence).
• Hands-on experience reading from and writing to DynamoDB or other NoSQL databases; Redshift.
• API design; API Gateway experience.
• Elasticsearch/OpenSearch experience.
• AWS certifications.
• Agile or SAFe certification.

