The purpose of this documentation is to explain how to use the pipelines; it is not technical documentation. If you want more detail about how the pipelines work once you reach the end of this documentation, see How do pipelines work?
CodeCommit is deprecated and is no longer used by the AWS Blu Age DevOps Pipeline. A standalone Gitlab server has been deployed and is only reachable from within the VPC.
Open https://gitlab.bluage.local
gitlab-devops: This Gitlab repository contains the AWS Blu Age DevOps Pipeline scripts and config files. Developers do not have to modify this repository.
gitlab-<PROJECT_NAME>-modern-application: This repository contains the modernized application code used by developers. AWS Blu Age DevOps Pipeline uses it for building the project.
gitlab-<PROJECT_NAME>-server: This repository contains the Tomcat config files, Groovy scripts and other resources for the modernization server. AWS Blu Age DevOps Pipeline uses it for building the project.
gitlab-<PROJECT_NAME>-test-cases: This repository contains the test case config files used by developers. AWS Blu Age DevOps Pipeline uses it for building the project.
The building pipelines are the pipelines that build the project and deploy the application on ECS. Initially, we only deploy the pipeline for the master branch of the project. As a user, you will be able to deploy pipelines for other environments.
The initial building pipeline is called codepipeline-app-master. It is triggered by the gitlab-<PROJECT_NAME>-modern-application and gitlab-<PROJECT_NAME>-server repositories and is related to the master branch.
To deploy a pipeline for another branch:
Modify the ENVIRONMENTS array in /dop/VARIABLES by adding the name of your new branch.
Create your new branch in the gitlab-<PROJECT_NAME>-modern-application and gitlab-<PROJECT_NAME>-server repositories before changing the Parameter Store.
The new pipeline will be called codepipeline-app-<BRANCH_NAME>.
Run codepipeline-jenkins to take your modification into account on Jenkins.
Note: This change will trigger the codepipeline-building-pipeline-factory pipeline, creating a new testable environment.
Parameter Store
All variables in the /dop/VARIABLES Parameter Store are editable by developers.
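As a sketch, editing such a variable can be done with the AWS CLI. The exact parameter name and value format below are assumptions; check your project's Parameter Store before running the real commands (shown commented out):

```shell
# Sketch: add a new branch to the ENVIRONMENTS variable under /dop/VARIABLES.
# The parameter name and comma-separated value format are assumptions --
# verify them in your project's Parameter Store first.
NEW_BRANCH=feature-x

# Read the current value (real command, commented out here):
# aws ssm get-parameter --name /dop/VARIABLES/ENVIRONMENTS \
#     --query Parameter.Value --output text

# Append the branch and write the value back (real command, commented out here):
# aws ssm put-parameter --name /dop/VARIABLES/ENVIRONMENTS \
#     --overwrite --value "master,${NEW_BRANCH}"

echo "would append ${NEW_BRANCH} to ENVIRONMENTS"
```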
You can trigger the pipeline in 2 ways:

Note: If you don’t see your pipeline, it may be on page 2 or a later page of the CodePipeline console.
Click on the Release Change button and then Release.
For example, pushing some code on the master branch of the gitlab-<PROJECT_NAME>-modern-application repository will trigger codepipeline-app-master.

Note: If implemented, you will receive a message on a Slack channel as well (common Slack channel name: bluage-notifications-
How to connect to the modernized application?
To access the application’s interface, connect from your developer Windows EC2 inside your project’s VPC. Depending on the environment you want to connect to, here are the URLs:
The pipeline fetches the sources from the s3-<AWS_REGION>-<PROJECT_NAME>-velocity-<ACCOUNT_ID> S3 bucket of your project account. As a developer, you will have to put your binary files (.war and .jar files) in this bucket. The CDK scripts deploy the following tree structure in the bucket; other directories will not be taken into account.

The pipeline fetches the .jar and .war files from velocity/runtime and velocity/webapps.
If you want to apply patches, you can put your .jar files in velocity-patches/runtime and your .war files in velocity-patches/webapps.
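The layout above can be prepared locally before uploading. This is a sketch: the local directory names mirror the tree the pipeline reads, and the upload command (commented out) uses the bucket name given earlier on this page:

```shell
# Sketch of the directory layout the pipeline expects in the velocity S3 bucket.
# Create it locally, drop your binaries in, then sync it to the bucket.
mkdir -p velocity/runtime velocity/webapps \
         velocity-patches/runtime velocity-patches/webapps \
         classpath

# Example upload from this directory (real command, commented out here):
# aws s3 cp . s3://s3-<AWS_REGION>-<PROJECT_NAME>-velocity-<ACCOUNT_ID>/ --recursive
```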
WARNING: .war files in velocity-patches override those from velocity/webapps, so they need to have the exact same name. This is not the case for .jar files.
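Since patch .war files only take effect when their name matches exactly, a quick local check can catch mismatches before upload. This is a sketch, assuming the local tree mirrors the bucket layout described above:

```shell
# Sanity-check sketch: every patch .war should have an identically named
# counterpart in velocity/webapps, since patches override by exact file name.
check_patches() {
    for patch in velocity-patches/webapps/*.war; do
        [ -e "$patch" ] || continue        # glob matched nothing: no patches
        name=$(basename "$patch")
        if [ ! -e "velocity/webapps/$name" ]; then
            echo "WARNING: $name has no counterpart in velocity/webapps"
        fi
    done
}
check_patches
```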
Put all the .jar files that you want to be accessible from the CLASSPATH variable in the classpath directory. As a developer, you do not have to configure the setenv.sh file; the pipeline takes care of it. However, if you want to modify it, check gitlab-devops/codebuild/codebuild-app/config/tomcat/bin/setenv.sh
IMPORTANT: If you want the application to be loadable, you have to put the AWS Blu Age on EC2 runtime in the S3 bucket.
You can get the AWS Blu Age on EC2 runtime with this command:
aws s3 cp s3://aws-bluage-runtime-artifacts-dev-<AWS_ACCOUNT_ID>-<AWS_REGION>/<GAPWALK_VERSION>/Framework/aws-bluage-runtime-<GAPWALK_VERSION>.tar.gz .
tar -xvzf aws-bluage-runtime-<GAPWALK_VERSION>.tar.gz
Put the bluage.bin file in the bin folder.
From now on, DevOps and Devs will have the same Tomcat configuration files. The config and conf files of the Tomcat configuration will be fetched from the gitlab-<PROJECT_NAME>-server repository with the following tree structure:

Read Setup the AWS Blu Age DevOps Pipelines requirements to learn how to fill this S3 bucket correctly, how to configure the application, and more.
Logs are directly available on CodePipeline:
Note: The deployment status can be green even though the modernized application deployment logs contain issues, so check the status of all the actions in the AppDeployment stage.
CloudWatch
Logs are available in CloudWatch:
Search for the log group called /ecs/app-<BRANCH_NAME>
Yes, all the .war files are stored in the following S3 bucket of your project account: s3-<REGION>-app-pipelines-artifacts-<ACCOUNT_ID>. In this bucket you can find the .war files generated by the pipeline and those fetched from the Velocity S3 bucket or the gitlab-<PROJECT_NAME>-server repository, depending on the configuration in the pom.xml.
Sonarqube is only reachable from EC2 instances inside your project’s VPC, so you can use your developer Windows EC2 to connect to it. Sonarqube can be executed on all environments: set the RUN_SONARQUBE variable to true or false when running the building pipeline of the specific environment. When the pipeline ends, you will be able to see the report by connecting to the following URL:
Note: The credentials for Sonarqube are in AWS Secret Manager.
The Gitlab repository used for putting test case config files is gitlab-<PROJECT_NAME>-test-cases.
The tree structure of this repository is essential: it determines whether your test cases work correctly and what will appear in Jenkins later. You cannot change the names of these directories.
See below a diagram of the tree structure:

Note: BTC test case directories are not represented but have the same tree structure as ITC test cases.
All the test cases have the same tree structure, whether they are BTC or ITC tests. See below a zoom into the structure of a specific test case directory:

Note: Red boxes are directories, white boxes are files
Note2: ITC_i/qdds/expected has the same tree structure as ITC_i/qdds/initial. However, snapshots are used nowadays, so the qdds directory is deprecated.
The naming convention is crucial here:
all.ini and config.ini must be named this way.
Important: You are not forced to have the below directories in your test case. However, if you create one, you have to respect the tree structure under this directory (as shown on the diagram above):
Exception: The qdds directory can contain the initial or expected directory or both.
Mandatory: The date and database directories and their inner files, however, are mandatory for running every test case.
Config file examples
snapshots.yml
APP_DB_SNAPSHOT_NAME: <SNAPSHOT_NAME>
APP_DB_TECHNO: <DB_TECHNO>
APP_DB_INSTANCE_TYPE: <EC2_INSTANCE_TYPE>
Note: APP_DB_TECHNO is the type of database you want to use. Refer to RTS techno types.
entrypoints.yml
# EntrypointName: Timeout (seconds)
ENTRYPOINTS:
- <ENTRYPOINT_NAME_1>: <TIMEOUT_1>
- <ENTRYPOINT_NAME_2>: <TIMEOUT_2>
- ...
Note: ENTRYPOINTS can have as many entrypoints as you want
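For illustration, a filled-in entrypoints.yml could look like the following. The entrypoint names and timeouts are placeholders, not values from any real project:

```yaml
# Illustrative example only -- entrypoint names and timeouts are placeholders
ENTRYPOINTS:
  - DAILY-BATCH: 300
  - MONTHLY-REPORT: 600
```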
If your .bacmp file refers to an input file in the details section, the file name must not contain the extension (e.g. .bin). See the example below:
{
  "comparisonType": "Binary",
  "binary": {
    "details": {
      "O.F.PC04": { # No extension!
        "fileformatName": "ff_19.fileformat"
      }
    },
    "folders": {
      "leftFolder": "output",
      "rightFolder": "expected",
      "fileformatFolder": "config"
    },
    "splitRecordFormats": true
  },
  "comparaisonDataStorage": "InRam",
  "doublePrallelisationEnabled": true
}
Note: leftFolder, rightFolder and hostName for DB comparison are modified by the pipeline before running the test case
.spec.ts
In your playwright files, it is really important that you do not commit .spec.ts files with a hardcoded server IP. Whenever you need to specify the server DNS, use the APP_URL environment variable as follows:
await page.goto(`${process.env.APP_URL}`);
Note: AWS Blu Age DevOps pipelines will modify this variable (APP_URL) on the fly in the Jenkins tests for CI needs.
When calling playwright manually on your EC2, you can give this variable as a parameter:
APP_URL=https://app.bluage.local/<BRANCH_NAME> SELENIUM_REMOTE_URL=https://selenium.bluage.local npx playwright test
Note: The name of your file has to end with .spec.ts to be taken into account.
IMPORTANT: When giving the SELENIUM_REMOTE_URL variable you must write https://, otherwise you will get a connect ECONNREFUSED error.
IMPORTANT2: If the port of the app is not 80, add the port to the EC2_DNS variable.
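Since hardcoded IPs in .spec.ts files must not be committed, a simple pre-commit grep can catch them. This is a sketch; the tests/ directory is an assumption, so adapt the path to wherever your playwright specs live:

```shell
# Pre-commit sketch: list any .spec.ts file that hardcodes an IPv4 address
# instead of using process.env.APP_URL. The tests/ path is an assumption.
if grep -rlE '([0-9]{1,3}\.){3}[0-9]{1,3}' --include='*.spec.ts' tests/ 2>/dev/null; then
    echo "Hardcoded IPs found in the files above -- use process.env.APP_URL"
else
    echo "OK: no hardcoded IPs"
fi
```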
Video on failure
The video will be available in the Jenkins reports only on failure.
Note: If you want to take screenshots on failure, read the following: https://github.com/microsoft/playwright/issues/14854
Your entrypoint is considered successful if the output is a JSON with the status field set to “Succeeded” or the HTML output is “Done.”
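That success rule can be expressed as a small helper. This is a sketch: it assumes the entrypoint response was saved to a file whose path you pass as an argument, and simply checks for either of the two success markers described above:

```shell
# Sketch: decide whether an entrypoint run succeeded from its captured output.
# Pass the file containing the response (file name/location is up to you).
entrypoint_succeeded() {
    # JSON success marker: "status": "Succeeded"
    # HTML success marker: Done.
    grep -Eq '"status"[[:space:]]*:[[:space:]]*"Succeeded"' "$1" \
        || grep -q 'Done\.' "$1"
}

# Example usage: entrypoint_succeeded response.json && echo "run OK"
```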
Inside the efs-test-cases, you will have to put your input and expected files for your BTC test cases by following this tree structure:

IMPORTANT: Your input and expected files must not have any extension in the EFS!
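A small helper can strip the extensions while copying files into the EFS tree. This is a sketch: all paths below are illustrative, so adapt them to your actual EFS mount point and test case name:

```shell
# Sketch: copy BTC input/expected files into the EFS directory, stripping
# their extension, since files in the EFS must not have one.
copy_btc_inputs() {  # usage: copy_btc_inputs SRC_DIR DST_DIR
    mkdir -p "$2"
    for f in "$1"/*; do
        [ -e "$f" ] || continue           # glob matched nothing
        base=$(basename "$f")
        cp "$f" "$2/${base%.*}"           # ${base%.*} drops the last extension
    done
}

# Example (illustrative paths -- adapt to your EFS mount and test case name):
copy_btc_inputs ./local-inputs ./efs-test-cases/BTC_1/input
```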
Note: As a developer, you should not modify the contents of the jobs and workingDir directories.
Note2: BTC_i is the name of your BTC as you wrote it in the gitlab-<PROJECT_NAME>-test-cases repository (the name of the test case without the environment).
As a developer, you will not trigger the Jenkins pipeline yourself, but you need to know how it works. The pipeline is called codepipeline-app-jenkins.
This pipeline is triggered every time there is a commit on the gitlab-<PROJECT_NAME>-test-cases repository. If you commit a new BTC or ITC job, that is, you created a new directory in the BTC or ITC directories, the job will be automatically added to Jenkins and duplicated for each environment. You will have to wait for the end of the pipeline to see the new job appear in Jenkins. The pipeline takes roughly a couple of minutes.
However, if this is just an update of your job, that is, the job was already created and your new commit only modified a config file, then you do not have to wait for the end of the pipeline to run your job on Jenkins, as every Jenkins job clones the Gitlab repository when it runs.
Jobs are executed every night at 8 PM (CET), except on Friday and Saturday nights.
To access Jenkins, you have to use your developer Windows EC2 inside the VPC of your project.
Note: If you don’t have an account, ask your testers or project manager to create one for you. Please do not create a guest user.
IMPORTANT: On your first connection, change your password: click on your name at the top right of the Jenkins page, then Configure in the left panel, and fill in the password field at the very bottom of the page.
How to navigate in Jenkins?
At your first connection, you will land on this page

You can notice 2 views:
modern-application gathers all the test cases related to your project.
Click on the modern-application view, select your type of test case (ITC or BTC) and then your environment.
Note: All your test cases in the Gitlab repository will be deployed for each environment in Jenkins. Depending on the environment, the same test case will not be executed against the same application.
Once you get to this page, you can start any job you want by clicking on the green arrow.

Note: When you run a job it will ask you some information. Depending on what you want to run, check the corresponding boxes. For example, check RESTORE_DB if you want the initial and expected DB to be restored.
See the execution report
Once your job is running you can click on its execution number

Note: The HTML Report on the test case main page only refers to the last execution
You can then have access to the logs and the report

Note: Each test case keeps the last 30 executions and their report
Jenkins Status
Your test case can end with 3 different statuses

Your test case returns an exit code different from 0. It fails somewhere.

Your test case returns an exit code different from 0, but the test has passed in the past.

Your test case returns an exit code of 0, without error.
Here are the phases executed while running a test case:
Note: Of course, most of the phases are optional and will not be executed if the corresponding directory in your test case (qdds, entrypoint, compare, side), presented in the tree structure chapter, is missing
How to add another test type in Jenkins?
Merely modify the tree structure of the gitlab-<PROJECT_NAME>-test-cases repository
This job will restart the static servers, that is, the ones created by the building pipeline.
Click on the Pipelines view, click Restart App Server

Click Build with Parameters and select your environment.

Cleaner is a job that you will never have to execute. It runs every night and deletes ECS containers, EFS directories and S3 directories that could be left over from running a job (everything older than 6 hours is deleted).
Note: Sometimes some ECS containers or files are not deleted properly by a job; that’s why this job exists.
Restart App Server
The Restart App Server job restarts the app-<BRANCH_NAME> static server.
Job Runner
Job Runner is a job allowing you to run all the jobs for a specific type of test case and environment. For example, if you want to run 100 BTC tests on master, you can either execute them one by one (and this is annoying) or run this job by selecting your type, BTC, and your environment, master, and tadaaaaa, all the jobs are run at once.
Jacoco reports
Jacoco reports is a job that generates the complete Jacoco reports of all the test cases. Basically, it merges the Jacoco report files of the latest build of each job and generates a single report.
Be careful: if a job is running, it will take the previous Jacoco report. However, if a job failed but succeeded in the past, its Jacoco report will be considered missing.