AWS Blu Age DevOps Pipeline - Setup Guide
If a Jenkins job fails, it is most likely because the application did not run properly. So after each deployment, check that the application is running correctly and that its logs are free of errors.
You can either check the logs in CloudWatch under the log group /ecs/app-<ENV>, or connect to the EC2 instance and run some Docker commands (in my opinion this is easier and gives you more visibility):
ec2-<AWS_REGION>-app-<ENV>
Watch the logs:
sudo su
docker ps # Copy the CONTAINER ID
docker logs -f <CONTAINER_ID>
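If you prefer the CloudWatch route, the same logs can be tailed from the AWS CLI (v2) without connecting to the EC2. A minimal sketch; ENV and AWS_REGION are placeholder values, and the snippet only prints the command so it stays runnable without AWS credentials:

```shell
# Placeholders (assumptions): adjust ENV and AWS_REGION to your deployment.
ENV="dev"
AWS_REGION="eu-west-1"
LOG_GROUP="/ecs/app-${ENV}"

# AWS CLI v2 command; --follow streams new log events as they arrive.
# Printed rather than executed so the snippet runs anywhere.
echo "aws logs tail ${LOG_GROUP} --follow --region ${AWS_REGION}"
```

Drop the `echo` to run the command for real once your CLI is configured.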
Get inside the container:
You can also get inside the container to check that all your files have been correctly imported and configured:
docker exec -it <CONTAINER_ID> bash
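For quick one-off checks you do not need an interactive shell: `docker exec` can run a single command. A sketch; the /app path is an assumption about where the application lives in the image:

```shell
# Sketch: run a single command inside the container instead of an
# interactive shell. The /app path is an assumed application directory.
inspect_container() {
  # Prints the command rather than executing it, so the sketch runs
  # without a Docker daemon; drop the echo to run the check for real.
  echo docker exec "$1" ls -l /app
}

inspect_container "<CONTAINER_ID>"
```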
All the EC2 instances created by ECS belong to an Auto Scaling Group. This means you cannot modify the properties of an EC2 as easily as usual: you have to modify the launch template of the ASG in question. If you want to modify it:
Click the ID of your launch template in the Launch template section, then modify the security group, IAM role, or any other EC2 property you need to change.
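The same change can be scripted: create a new launch template version and make it the default, after which the ASG launches new instances from it. A sketch with assumed names (the template name, security group ID, and version numbers are placeholders); the snippet only assembles and prints the commands:

```shell
# Placeholders (assumptions): your launch template's name and the new SG.
LT_NAME="ecs-app-dev"
NEW_SG="sg-0123456789abcdef0"

# 1) Create a new template version based on an existing one, overriding the
#    security group. Replace <CURRENT_VERSION> with the version to copy from.
CREATE_CMD="aws ec2 create-launch-template-version --launch-template-name ${LT_NAME} --source-version <CURRENT_VERSION> --launch-template-data '{\"SecurityGroupIds\":[\"${NEW_SG}\"]}'"

# 2) Make the version returned by step 1 the default for new instances.
SET_DEFAULT_CMD="aws ec2 modify-launch-template --launch-template-name ${LT_NAME} --default-version <NEW_VERSION>"

printf '%s\n%s\n' "${CREATE_CMD}" "${SET_DEFAULT_CMD}"
```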
Parameter Store is often used to store variables such as paths, DNS names, Docker image versions, and so on.
Here is a list of the parameter stores used and what they store:
- /app/VARIABLES: Stores variables about the modernized application
- /jenkins/VARIABLES: Stores variables about Jenkins and the tests running on it
- /lambda/slack-notifications/VARIABLES: Stores variables useful for this Lambda
- /selenium/VARIABLES: Stores variables about Selenium
- /sonarqube/VARIABLES: Stores variables about SonarQube
- /dop/VARIABLES: Stores configuration variables for customizing the CI
Note: Except for /dop/VARIABLES, I recommend not modifying these variables, but sometimes you may have to. Keep in mind that most of the variables are dynamic, except the Docker image name and version variables. If you change those, you will have to run the codepipeline-docker-image-builder pipeline to apply the change.
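To review what a given store currently holds before touching anything, the SSM CLI can list every parameter under a path. A sketch; the snippet prints the command rather than calling AWS so it runs anywhere:

```shell
# PREFIX can be any of the paths above: /app, /jenkins, /selenium, /dop, ...
PREFIX="/app"

# --recursive walks nested paths; --with-decryption also resolves
# SecureString parameters. Printed, not executed.
LIST_CMD="aws ssm get-parameters-by-path --path ${PREFIX} --recursive --with-decryption"
echo "${LIST_CMD}"
```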
To change the Docker image used by the application:
- Update the DOCKER_IMAGE_NAME or VERSION variables
- Run the codepipeline-docker-image-builder pipeline to fetch the new image and push it to the ECR
- Run the codepipeline-app-<ENV> pipelines
To add a new environment, add it to the ENVIRONMENTS variable in /dop/VARIABLES.
Note: the environment value must match an existing branch name in both the modern-application and server git repositories.
Doing so will trigger the codepipeline-building-pipeline-factory codepipeline, which will create the necessary AWS resources for the new environment.
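Assuming ENVIRONMENTS lives under the /dop/ prefix as its own String parameter holding a comma-separated list (an assumption; adapt the name and format to how /dop/VARIABLES is actually laid out), the update could be scripted with `aws ssm put-parameter`. The snippet only prints the command:

```shell
# Assumptions: parameter name /dop/ENVIRONMENTS and a comma-separated value.
NEW_ENV="uat"   # must match a branch in the modern-application and server repos

# --overwrite replaces the existing value. Printed, not executed, so the
# snippet runs without AWS credentials.
PUT_CMD="aws ssm put-parameter --name /dop/ENVIRONMENTS --value dev,${NEW_ENV} --type String --overwrite"
echo "${PUT_CMD}"
```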
If your Gitlab repositories are corrupted or lost, you may want to restore them. The Lifecycle Manager backs up the Gitlab EC2 twice a day, except on weekends. Here is how to restore your previous version of Gitlab.
gitlab ASG
IMPORTANT: Do not forget to untick the Launch and Terminate boxes of the gitlab ASG.
The EC2 needs to be terminated
- Terminate the Gitlab EC2 (set its termination protection to false beforehand)
- Let the Gitlab ASG recreate a new EC2
Note: If you followed the method with no EC2 termination before this one, do not forget to untick the Launch and Terminate boxes of the ASG if they were ticked before.
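Attaching the volume restored from the backup snapshot can also be done from the CLI. A sketch with placeholder IDs (the volume and instance IDs are assumptions); the snippet prints the command rather than running it:

```shell
# Placeholders (assumptions): the volume created from the Lifecycle Manager
# snapshot and the newly recreated Gitlab EC2.
VOLUME_ID="vol-0123456789abcdef0"
INSTANCE_ID="i-0123456789abcdef0"

# Attach as /dev/sdg, matching the device name used in the steps below.
# Printed, not executed, so the snippet runs without AWS credentials.
ATTACH_CMD="aws ec2 attach-volume --volume-id ${VOLUME_ID} --instance-id ${INSTANCE_ID} --device /dev/sdg"
echo "${ATTACH_CMD}"
```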
- Attach the restored backup volume to the EC2 with the /dev/sdg device name
sudo su
lsblk -o NAME,UUID,MOUNTPOINT # Shows the partition name of the /dev/sdg device
xfs_repair -L /dev/<PARTITION_NAME> # Likely PARTITION_NAME=nvme1n1p1 according to the previous command
xfs_admin -U generate /dev/<PARTITION_NAME>
mount /dev/<PARTITION_NAME> /data
docker ps # Copy the CONTAINER ID
docker stop <CONTAINER_ID>
rm -rf /etc/gitlab/* /var/log/gitlab/* /var/opt/gitlab/*
cp -r /data/etc/gitlab/* /etc/gitlab/
cp -r /data/var/log/gitlab/* /var/log/gitlab/
cp -r /data/var/opt/gitlab/* /var/opt/gitlab/
chown -R ec2-user: /etc/gitlab /var/log/gitlab /var/opt/gitlab
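The copy step above can be written as a small loop over the three Gitlab directories. A runnable sketch: the real /data and live paths are replaced with throwaway temp directories here so the snippet can be exercised anywhere, without a Gitlab backup or root access; the loop in the middle is the part that maps onto the real restore.

```shell
# Demo of the restore loop against a throwaway directory tree standing in
# for the real /data (backup) and / (live) filesystems.
set -eu
BACKUP="$(mktemp -d)"   # stands in for /data
TARGET="$(mktemp -d)"   # stands in for the live filesystem root

# Fake a restored backup with one marker file per Gitlab directory.
for d in etc/gitlab var/log/gitlab var/opt/gitlab; do
  mkdir -p "${BACKUP}/${d}" "${TARGET}/${d}"
  echo "backup" > "${BACKUP}/${d}/marker"
done

# The actual restore loop: wipe each live directory, then copy the backup in.
for d in etc/gitlab var/log/gitlab var/opt/gitlab; do
  rm -rf "${TARGET:?}/${d:?}"/*
  cp -r "${BACKUP}/${d}/." "${TARGET}/${d}/"
done

cat "${TARGET}/etc/gitlab/marker"   # → backup
```

On the real host, point BACKUP at /data and TARGET at /, then finish with the chown to ec2-user as above.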