- Oct 2020
-
Subprojects
There are 3 subprojects here. These are all "blocking", meaning that if one of them fails, the subsequent subprojects don't get run.
Also, subprojects 1 and 3 actually run the same job, which copies things from one place in Artifactory to another (see the sketch after this list):
- copies the tar with these parameters: SOURCE_STAGE=master/latest TARGET_STAGE=master/in-flight/demo
- copies the tar with these parameters: SOURCE_STAGE=master/in-flight/demo TARGET_STAGE=master/known-good
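A minimal sketch of what one of these copy jobs could look like, assuming it uses the Artifactory REST copy API and the appconnect-serenity-generic-local repo named in the promote command further down (the mechanism, repo and archive path are assumptions, not confirmed from the job config):

  # Hypothetical sketch: promote an archive from one stage folder to another in Artifactory
  SOURCE_STAGE=master/latest
  TARGET_STAGE=master/in-flight/demo
  REPO=appconnect-serenity-generic-local    # assumption: same repo as the promote command below
  curl -u "$ARTIFACTORY_USER:$ARTIFACTORY_API_KEY" -X POST \
    "https://na.artifactory.swg-devops.com/artifactory/api/copy/${REPO}/${SOURCE_STAGE}/ibm-appconnect-operator.tar.gz?to=/${REPO}/${TARGET_STAGE}/ibm-appconnect-operator.tar.gz"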
-
This job gets triggered by the final stage (create catalog manifest and CASE files) of the https://appcon-jenkins.swg-devops.com/job/ibm-appconnect-operator/ job.
It basically does the following:
- takes the "latest" ibm-appconnect-operator
- tests whether an OLM install works against it, and if so, copies it into known-good
To learn more, see https://github.ibm.com/Cloud-Integration/ibm-appconnect-operator/blob/master/docs/OperatorPackagingPipeline.md
-
Build
This has 3 scripts.
First script: this essentially runs $WORKSPACE/appconnect-pipeline-tools/bootstrap-pipeline-tools.sh
This in turn curls down https://na.artifactory.swg-devops.com/artifactory/webapp/#/artifacts/browse/tree/General/appconnect-firefly-apps-generic-local/known-good/artifacts/appconnect-pipeline-tools.tar.gz
It creates the folder /tmp/pipeline-bootstrap-temp, then extracts the tar into that folder.
It then renames this folder to $WORKSPACE/ppl-tools, i.e. /home/jenkins/workspace/cp4i-promote-selective-artifacts/ppl-tools
Second script: this ends up running the "CMD" command of - node /home/jenkins/workspace/cp4i-promote-selective-artifacts/ppl-tools/pipeline.js promote master/in-flight/demo master/known-good --sourceRepo appconnect-serenity-generic-local --targetRepo appconnect-serenity-generic-local ibm-appconnect-operator,appconnect-deployment-scripts,cp4i-deploy-operator
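Roughly, those two scripts boil down to something like this sketch (the direct download URL is inferred from the webapp browse link above, and the credential variables are assumptions):

  # First script (bootstrap-pipeline-tools.sh, roughly)
  curl -fsSL -u "$ARTIFACTORY_USER:$ARTIFACTORY_API_KEY" \
    -o /tmp/appconnect-pipeline-tools.tar.gz \
    "https://na.artifactory.swg-devops.com/artifactory/appconnect-firefly-apps-generic-local/known-good/artifacts/appconnect-pipeline-tools.tar.gz"
  mkdir -p /tmp/pipeline-bootstrap-temp
  tar -xzf /tmp/appconnect-pipeline-tools.tar.gz -C /tmp/pipeline-bootstrap-temp
  mv /tmp/pipeline-bootstrap-temp "$WORKSPACE/ppl-tools"

  # Second script (the promote command)
  node "$WORKSPACE/ppl-tools/pipeline.js" promote master/in-flight/demo master/known-good \
    --sourceRepo appconnect-serenity-generic-local \
    --targetRepo appconnect-serenity-generic-local \
    ibm-appconnect-operator,appconnect-deployment-scripts,cp4i-deploy-operator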
-
Source Code Management
-
create catalog manifest and CASE files
In this stage we create + upload the ibm-appconnect-operator.tar.gz archive.
Unstashes 'repo', 'operatormanifestinfo', 'bundlemanifestinfo' and 'csv', so that we have access to the operator, operator-init, and bundle fat manifest digests, as well as the csv and operator.yaml files.
Runs the script create-operator-archive.sh, which in turn runs the create-catalog-manifest.sh script, which in turn does:
Downloads cp4i-operator-bundle-tools.tar.gz from Artifactory and extracts it into jenkins-build-scripts/cp4i-operator-bundle-tools, so that the push-images-to-er.sh script can be used later on.
Calls the create-manifest-image-from-platform-images.sh script, which in turn creates the appconnect-operator-catalog fat manifest with the tag ${VERSION}-${BUILD_TIMESTAMP} and pushes it up to appconnect Artifactory. It also creates the OperatorCatalogDigest file.
Returning to create-catalog-manifest.sh, updates the deploy/catalogsource.yaml file with the new appconnect-operator-catalog fat manifest's digest value.
Calls the push-images-to-er.sh script in order to copy the appconnect-operator-catalog fat manifest to the staging Entitled Registry. It is copied across using tag->digest, then digest->tags, then arch-specific-tags->arch-digests (to enable VA scanning).
Back in the create-catalog-manifest.sh script, in staging ER, assigns "latest" to the newly uploaded catalog fat manifest, ${VERSION}-${BUILD_TIMESTAMP}->latest.
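The tag->digest / digest->tag copy pattern that push-images-to-er.sh uses could look roughly like this, assuming skopeo as the copy tool (the real script may use docker or another tool; the registry variables are placeholders, not actual hostnames):

  # Illustrative sketch only - not the real push-images-to-er.sh
  SRC="${ARTIFACTORY_REGISTRY}/appconnect-operator-catalog"   # placeholder registry hostnames
  DST="${STAGING_ER}/appconnect-operator-catalog"
  TAG="${VERSION}-${BUILD_TIMESTAMP}"

  # copy the fat manifest by tag, keeping every arch entry (and therefore the digest)
  skopeo copy --all "docker://${SRC}:${TAG}" "docker://${DST}:${TAG}"

  # compute the fat manifest's digest and retag it as "latest" in staging ER
  DIGEST="sha256:$(skopeo inspect --raw "docker://${DST}:${TAG}" | sha256sum | cut -d' ' -f1)"
  skopeo copy --all "docker://${DST}@${DIGEST}" "docker://${DST}:latest"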
Returning to create-operator-archive.sh:
Feeds the stashed goodimages.json to the create-resources-yaml.sh script. This in turn generates the resources.yaml file under the "case" folder.
Curls cp4i-operator-bundle-tools.tar.gz from Artifactory and puts it in the jenkins-build-scripts folder. This folder will get excluded from the archive a bit later.
Downloads cp4i-deploy-operator.tar.gz from Artifactory and extracts it in the stable/ibm-appconnect-bundle/tests folder.
Copies airgap.sh into the stable/ibm-appconnect-bundle/operators/ibm-appconnect/scripts/ folder.
Copies all "PROD" goodimages.json images into staging ER.
Updates the "latest" tag to now point to the newly uploaded goodimages.json images in staging ER.
Also copies some "dev" goodimages.json images into staging ER. That's so that developers can request images be pulled from Docker Hub, but have them actually pulled from staging ER thanks to the OpenShift registry redirect.
Copies the dev images above without the /appc bit, so Docker Hub URLs don't have to specify /appc. Also retags them to latest.
Creates the ibm-appconnect-operator.tar.gz archive, but excludes:
--exclude "${APP_NAME}/jenkins-build-scripts"
--exclude "${APP_NAME}/.travis.yml"
--exclude "${APP_NAME}/.git"
Back in the Jenkinsfile, uploads this archive to Artifactory in the "latest" folder; this will overwrite what is already there.
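Put together, the archive + upload step amounts to roughly this (the upload repo/path is an assumption based on the promote job's SOURCE_STAGE above):

  # Sketch: build the archive (run from the directory containing ${APP_NAME}) and upload it
  APP_NAME=ibm-appconnect-operator
  tar -czf "/tmp/${APP_NAME}.tar.gz" \
    --exclude "${APP_NAME}/jenkins-build-scripts" \
    --exclude "${APP_NAME}/.travis.yml" \
    --exclude "${APP_NAME}/.git" \
    "${APP_NAME}"
  curl -u "$ARTIFACTORY_USER:$ARTIFACTORY_API_KEY" -T "/tmp/${APP_NAME}.tar.gz" \
    "https://na.artifactory.swg-devops.com/artifactory/appconnect-serenity-generic-local/master/latest/${APP_NAME}.tar.gz"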
Updates the Jenkins job description with info about what has been built.
Triggers the test-and-promote-cp4i-demo-system job, but doesn't wait for it to succeed.
resulting files:
- resources.yaml
-
Get build source
This is a walkthrough of - https://appcon-jenkins.swg-devops.com/job/ibm-appconnect-operator/2640/consoleFull
- clones git@github.ibm.com:Cloud-Integration/ibm-appconnect-operator.git
- Creates the file - OperatorImageTag
- Updates job description (build history panel on LHS)
- Downloads goodimages.json from Artifactory
- clones cp4i-operator-bundle-tools.git
- clones git@github.ibm.com:IBMPrivateCloud/cloud-pak-airgap-cli.git
- stashes with name "repo"
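The clone/download steps above are roughly equivalent to this sketch (the goodimages.json location is left as a placeholder because the exact Artifactory path isn't captured in these notes):

  # Sketch only
  git clone git@github.ibm.com:Cloud-Integration/ibm-appconnect-operator.git
  git clone git@github.ibm.com:IBMPrivateCloud/cloud-pak-airgap-cli.git
  curl -fsSL -u "$ARTIFACTORY_USER:$ARTIFACTORY_API_KEY" -o goodimages.json \
    "https://na.artifactory.swg-devops.com/artifactory/${GOODIMAGES_REPO_PATH}/goodimages.json"   # placeholder path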
-
amd64 bundle image creation
- unstashes what got stashed in the "create operator manifest image" stage (the main reason being that it needs the updated operator.yaml file)
- runs the create-operator-bundle.sh script:
2.1 generates the clusterserviceversion.yaml file by running the make csv command
2.2 runs "operator-sdk bundle create" to create the bundle image
2.3 pushes the image up to appconnect Artifactory
- stashes the repo with the name "csv". This is limited to only the newly generated clusterserviceversion.yaml file (it contains the same digests as operator.yaml)
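The make csv / bundle-create steps above amount to roughly this sketch (the bundle image name, package name and manifests directory are assumptions about the repo layout):

  # Sketch of create-operator-bundle.sh's core steps
  make csv    # regenerates clusterserviceversion.yaml

  BUNDLE_IMG="${ARTIFACTORY_REGISTRY}/appconnect-operator-bundle:${VERSION}-${BUILD_TIMESTAMP}-amd64"   # placeholder registry
  operator-sdk bundle create "${BUNDLE_IMG}" \
    --directory deploy/olm-catalog/ibm-appconnect \
    --package ibm-appconnect \
    --channels stable \
    --default-channel stable
  docker push "${BUNDLE_IMG}"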
-
create operator manifest image
runs the create-operator-manifests.sh script:
- Downloads cp4i-operator-bundle-tools.tar.gz from Artifactory and extracts its content into the jenkins-build-scripts folder, in its own folder called "cp4i-operator-bundle-tools". The path to this folder is $BUNDLE_TOOLS_DIR. We'll use this archive's push-images-to-er.sh script a bit later on.
- Runs the script create-manifest-image-from-platform-images.sh using appconnect-operator as script parameter
2.1 pulls down each arch image using its ${VERSION}-${BUILD_TIMESTAMP}-${arch} tag
2.2 creates a docker manifest with the tag ${VERSION}-${BUILD_TIMESTAMP} and adds both arch entries
2.3 pushes the operator fat manifest up to appconnect Artifactory
2.4 creates the OperatorImageDigest file
- does the same for the init image, which results in the creation of the OperatorInitImageDigest file
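The fat-manifest creation that create-manifest-image-from-platform-images.sh does corresponds roughly to this sketch ("both entries" above implies two arches; the second arch, the registry variable and the way the digest file is written are assumptions):

  # Sketch: build and push the appconnect-operator fat manifest
  REG="${ARTIFACTORY_REGISTRY}"            # placeholder registry hostname
  TAG="${VERSION}-${BUILD_TIMESTAMP}"
  for arch in amd64 s390x; do              # assumption: the two arches
    docker pull "${REG}/appconnect-operator:${TAG}-${arch}"
  done
  docker manifest create "${REG}/appconnect-operator:${TAG}" \
    "${REG}/appconnect-operator:${TAG}-amd64" \
    "${REG}/appconnect-operator:${TAG}-s390x"
  # docker manifest push prints the manifest list digest - capture it as the digest file
  docker manifest push "${REG}/appconnect-operator:${TAG}" | tee OperatorImageDigest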
- run the script - push-images-to-er.sh to copy both fat manifests to staging ER. It copies using fat manifest digests, followed by tags. It also copies across arch tags for individual images.
- Updates the operator.yaml file with the fat manifest digests of both the appconnect-operator and init images.
- Creates a new stash called operatormanifestinfo. This specifically contains the 2 new top-level digest files and the changes made to operator.yaml (with the same digests).
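A hedged sketch of how that digest substitution into operator.yaml might be done (the file path, the image-reference patterns and the digest-file format are assumptions about the real manifest):

  # Sketch: swap image tags for fat manifest digests in operator.yaml
  OPERATOR_DIGEST=$(cat OperatorImageDigest)        # assumed to contain "sha256:..."
  INIT_DIGEST=$(cat OperatorInitImageDigest)
  sed -i \
    -e "s|appconnect-operator-init:[^[:space:]]*|appconnect-operator-init@${INIT_DIGEST}|" \
    -e "s|appconnect-operator:[^[:space:]]*|appconnect-operator@${OPERATOR_DIGEST}|" \
    deploy/operator.yaml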
-
Create Manifest image from Bundle
This stage should be called: Create bundle Fat Manifest
1.a Unstashes "repo" - not sure why. 1.b Unstashes "operatormanifestinfo" - not sure why.
This runs the create-bundle-manifest.sh script, which does:
- Downloads cp4i-operator-bundle-tools.tar.gz from Artifactory and extracts its content into the jenkins-build-scripts folder, in its own folder called "cp4i-operator-bundle-tools". The path to this folder is $BUNDLE_TOOLS_DIR.
- Runs the script create-manifest-image-from-platform-images.sh using appconnect-operator-bundle as script parameter
2.1 pulls down each arch image using its ${VERSION}-${BUILD_TIMESTAMP}-${ARCH} tag
2.2 creates a docker manifest and adds both entries
2.3 pushes the fat manifest up to appconnect Artifactory
2.4 creates the OperatorBundleDigest file containing the fat manifest's digest value
- stashes the repo with the name "bundlemanifestinfo", so as to save the OperatorBundleDigest file.
-
amd64 catalog image creation
- unstashes the previous stage's repo - bundlemanifestinfo - because it needs access to the OperatorBundleDigest file.
- Runs the script create-operator-catalog.sh:
2.1 runs the "opm index add ..." command to create the appconnect-operator-catalog:${VERSION}-${BUILD_TIMESTAMP}-${arch} image
2.2 pushes the image up to appconnect Artifactory
No new files created/updated/deleted in the workspace in this stage.
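The catalog-build step above is roughly this sketch (referencing the bundle by the digest from OperatorBundleDigest is an inference from the unstash step; the registry variable is a placeholder):

  # Sketch of create-operator-catalog.sh's core commands
  REG="${ARTIFACTORY_REGISTRY}"
  BUNDLE_DIGEST=$(cat OperatorBundleDigest)
  CATALOG_IMG="${REG}/appconnect-operator-catalog:${VERSION}-${BUILD_TIMESTAMP}-amd64"

  opm index add \
    --bundles "${REG}/appconnect-operator-bundle@${BUNDLE_DIGEST}" \
    --tag "${CATALOG_IMG}" \
    --container-tool docker
  docker push "${CATALOG_IMG}"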
-
amd64 build and test
Executes the create-operator-image.sh script:
- runs **make build** using the goodimages.json downloaded in the previous stage
1.1 builds both the ibm-appconnect-operator and ibm-appconnect-operator-init images
- tests those images by pushing them up to an e2e cluster
- if e2e testing passes, pushes both images up to appconnect Artifactory with the tags ${VERSION}-${BUILD_TIMESTAMP}-${arch} and 'latest-${arch}'
This stage doesn't create/update/delete files in the workspace. It does make some changes as part of e2e testing, but they get reset again, since they didn't get stashed.
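The final push step above looks roughly like this sketch (it assumes make build leaves local images tagged ibm-appconnect-operator:latest and ibm-appconnect-operator-init:latest, and uses a placeholder registry variable):

  # Sketch: retag and push the two images after e2e testing passes
  REG="${ARTIFACTORY_REGISTRY}"
  arch=amd64
  for img in ibm-appconnect-operator ibm-appconnect-operator-init; do
    docker tag "${img}:latest" "${REG}/${img}:${VERSION}-${BUILD_TIMESTAMP}-${arch}"
    docker tag "${img}:latest" "${REG}/${img}:latest-${arch}"
    docker push "${REG}/${img}:${VERSION}-${BUILD_TIMESTAMP}-${arch}"
    docker push "${REG}/${img}:latest-${arch}"
  done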