Blog

Here are some announcements we are proud to share with you!

AWS Blu Insights - Say hello to Calibration scoping

The AWS Blu Age modernization process follows multiple steps. First, the customer provides legacy artifacts, which are analyzed during the Assessment phase. This phase helps us understand the size, the content, and the potential challenges of the codebase, and prepare the next step: the Calibration. The purpose of this phase is to involve the customer in an initial small project, clarifying their role in the modernization process and showcasing the capabilities of the AWS Blu Age solution, prior to modernizing the entire codebase. In the Calibration phase, we leverage the results of the Assessment to select a representative sub-scope of the full project. This allows us to discover the technical implementations, design choices, and specificities of the codebase early rather than throughout the AWS Blu Age modernization process. After this phase, the Mass Modernization can begin, during which we modernize and migrate the entire project scope. A good choice of calibration scope results in a smoother Mass Modernization phase, reducing the overall cost and time.

What is a good calibration?

The calibration scope must be defined with the goal of testing the highest number of different technical functionalities of the legacy application, focusing on the previously detected challenges, using the lowest number of lines of code. To make this selection as effective as possible, it is important to consider different criteria. An optimal scope should:

- contain testable and independently runnable features;
- include some of the challenges (errors, libraries, non-sweet-spot code...) and represent the technical functionalities;
- leverage the dependency analysis to include the most central legacy artifacts.

We introduce AWS Blu Insights Calibration Scoping as an assistant to help define this optimal scope. Its main purpose is to determine a score for each testable feature in a codebase and, taking into account the desired number of lines of code, suggest a set of highly scored features to include in the calibration scope. A few metrics are computed to help accomplish this goal.

How do we compute a calibration scope?

Criteria 1: Testable and independently runnable features

The calibration scope definition relies on the outputs of the Application Entrypoints and Application Features processes. The decomposition produced by the Application Features process provides a set of testable features upon which the metrics are computed, and the calibration scope will be an aggregation of some of them.

Criteria 2: Technical functionalities and pain points

At a broad level, a legacy program incorporates technical functionalities from various categories such as file management, database accesses, mathematical operations, and more. To encompass these different families, we propose the concepts of Category and Code Pattern for file statements, each linked to a specific family. With this Code Pattern notion, we can compute two metrics:

- Coverage: the percentage of code pattern types present in the file relative to all the code pattern types in the application features.
- Rarity: the mean rarity of the code patterns in the file (i.e. for each pattern, the ratio of its number of occurrences in the file to its total number of occurrences in all the files).
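To make these two metrics concrete, here is a minimal sketch of how they could be computed (an illustration only; the actual Blu Insights implementation differs):

```python
from collections import Counter

def coverage(file_patterns: Counter, all_pattern_types: set) -> float:
    """Percentage of pattern types present in the file vs. all known types."""
    if not all_pattern_types:
        return 0.0
    present = set(file_patterns) & all_pattern_types
    return 100 * len(present) / len(all_pattern_types)

def rarity(file_patterns: Counter, global_patterns: Counter) -> float:
    """Mean, over the patterns of the file, of the ratio between
    in-file occurrences and total occurrences across all files."""
    if not file_patterns:
        return 0.0
    ratios = [count / global_patterns[p] for p, count in file_patterns.items()]
    return sum(ratios) / len(ratios)
```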
When a legacy artifact cannot be modernized by the AWS Blu Age Transformation Engine, these metrics cannot be computed. This information is still taken into account in the analysis, under the Error category.

Criteria 3: Central legacy artifacts

From the dependency analysis of the AWS Blu Insights Codebase, we introduce the notion of Centrality, a value computed from the links (inbound and outbound) between a given file and other files, influenced by its neighbors' values. The higher the centrality, the more central the node is in the application, and the higher the score of the Application Features it belongs to.
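The exact scoring is internal to AWS Blu Insights, but the intuition is close to classic PageRank-style centrality, where a node's value depends on its neighbors' values. A toy sketch:

```python
def centrality(edges: list[tuple[str, str]], iterations: int = 20,
               damping: float = 0.85) -> dict[str, float]:
    """Toy PageRank-style score: a node is central if central nodes link to it."""
    nodes = {n for edge in edges for n in edge}
    out_links: dict[str, list[str]] = {n: [] for n in nodes}
    for src, dst in edges:
        out_links[src].append(dst)
    score = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_score = {}
        for n in nodes:
            incoming = sum(score[m] / len(out_links[m])
                           for m in nodes if n in out_links[m])
            new_score[n] = (1 - damping) / len(nodes) + damping * incoming
        score = new_score
    return score
```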
How to run the calibration assistant

Although we tried to simplify the technical concepts behind a calibration in the previous sections, they remain complex. The good news is that, as a user of AWS Blu Insights, you don't need to understand or handle those details: the Calibration assistant automates the process.

Launch a Calibration

The entry point of this new feature, "Calibrate", is located in a Transformation Center project, on the Inputs view (... menu). When launching a Calibration process, a parent workpackage must be provided by the customer. This workpackage can result from the Application Features process, but can also be tailor-made based on other inputs. Its children workpackages will be used to consolidate the calibration metrics of the artifacts composing them.

Calibration run

Once the Calibration run has been created, it is accessible from the Transformation Runs view. Selecting it displays a Calibration button on the bottom banner, which leads to a new Calibration view.

Calibration view

The main element of this new page is a table in which each line represents a workpackage with all the metrics previously computed. The status of the transformation for each workpackage is displayed as a status bar representing the percentage of success/warning files modernized within it. When selecting one or multiple workpackages, the coverage percentage and the total included lines of code are displayed on the bottom banner. The remaining coverage and the current calibration value for each workpackage are also updated accordingly in the corresponding table columns. All the elements needed to choose a calibration scope are gathered in this view: select highly scored workpackages while keeping an eye on the different constraints and metrics (lines of code, remaining calibration value, ...).

Categories view

Another view shows all the categories that were detected during the transformation. As in the previous view, the user can select workpackages, and the selection is reflected in both views.

Auto scoping

To further help in choosing a calibration scope, we introduce the auto scoping process. It takes the different metrics computed during the transformation step, as well as restrictions provided by the user (maximum number of lines of code or of workpackages wanted in the calibration scope), and finds the most valuable combination of application features within those constraints. You can choose whether to automatically include workpackages that could not be modernized by the AWS Blu Age Transformation Engines. These files are challenging, and as such are suitable candidates for a calibration scope, but the number of such artifacts varies drastically from project to project, and it can also be useful to make the calibration choice without this constraint.

Calibration report

After selecting workpackages, whether manually or automatically with the auto scoping process, a calibration report can be exported. This report can then be used as an Excel import within an AWS Blu Insights Codebase to consolidate the different workpackages into a Calibration scope and a Mass Modernization scope. As an outcome, the codebase application is split into two parent workpackages: Calibration and Mass Modernization. The Calibration workpackage contains as children all the Application Features selected with the assistant, and the Mass Modernization workpackage all the other Application Features. The definition of this Calibration workpackage launches the next steps of the modernization journey with the customer: we can now present this result to the customer, with the previous calibration process validating our choices, and the Calibration workpackage will define the test cases for the Calibration phase.

We would be happy to hear from you about this new feature. Have a productive day!

Read more

AWS Blu Insights - VSAM Datasets identification in the dependencies graph

VSAM stands for Virtual Storage Access Method. It is a data set type and an easy, fast, and secure data access method. There are four kinds of VSAM. The first and most common one is the Key Sequenced Data Set (KSDS), which provides random access thanks to indexed data; IMS uses KSDSs. The second kind, the Entry Sequenced Data Set (ESDS), keeps non-indexed records in sequential order; it is used by IMS, DB2, and z/OS UNIX. The third kind, the Relative Record Data Set (RRDS), uses numbered records; this kind is rare. The last one is the Linear Data Set (LDS), but it will never appear in modernization projects.

VSAM in Blu Insights

A good understanding of the VSAM design is a prerequisite for a successful modernization, in particular to know whether datasets are a key point of the project and whether the client has provided all the needed files. By aggregating information from multiple files, the dependency analysis detects and differentiates VSAM datasets: the CSD file includes dataset declarations, and JCL Control Cards specify the VSAM kind, which is determined from properties (KSDS: indexed, ESDS: nonindexed, RRDS: numbered, LDS: linear). With this new feature, the customer can filter the VSAM nodes to find their parents and create the Blusam entities easily. 📊 For example, in the CardDemo project, we detected that 15 out of the 17 found datasets are of type VSAM KSDS. 🎉 The remaining datasets are indirect uses of other datasets.

Have a productive day 🚀

Read more

AWS Blu Insights - Say hello to Application Features

Understanding the boundaries of a codebase is key during the assessment phase to deeply understand the intricacies of each module and the links between them. It is also key to successful proofs of concept and projects, because it is the foundation of proper scoping, sprint definition, test case capture by the customers, etc.

Part of this analysis relies on the concept of entrypoints. The entrypoints serve as the parent elements for the codebase's smallest self-sufficient subparts, enabling us to divide the codebase into smaller, easily testable application units. Each entrypoint represents the beginning of a feature, module, application, batch, or screen scenario. As such, they are a central element in the modernization journey.

Once the entrypoints have been detected, the following step is to consolidate the codebase's subparts. In order to visualize these different subparts, we have created the concept of Application Features. These are standalone sets of assets defined by their respective entrypoints, computed from the list of dependencies of the entrypoint and its children. Each of these sets of assets is bounded by the other entrypoints of the codebase. To compute these sets, for each entrypoint we explore the dependencies; when encountering another entrypoint, we stop the exploration, and all the dependencies that were encountered are consolidated into a set of assets corresponding to the Application Feature.
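Conceptually, the exploration is a simple graph traversal. The following is a simplified sketch of the idea (not the actual Blu Insights engine):

```python
from collections import deque

def application_feature(entrypoint: str, children: dict[str, list[str]],
                        entrypoints: set[str]) -> set[str]:
    """Collect all assets reachable from an entrypoint, stopping at the
    other entrypoints, which bound the feature."""
    feature, queue = {entrypoint}, deque([entrypoint])
    while queue:
        node = queue.popleft()
        for dep in children.get(node, []):
            if dep in entrypoints:     # frontier: another feature starts here
                continue
            if dep not in feature:
                feature.add(dep)
                queue.append(dep)
    return feature
```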
By visualizing a codebase as a list of linked features, it becomes easier for customers to understand their assets and plan their modernization journey.

Application Entrypoints

Our goal was to improve the previous algorithm for identifying entrypoints, which relied solely on the graph's definition: a node is considered an entrypoint if it has no parent and at least one child. To move one step further and integrate the legacy context into the definition of an entrypoint, the previous menu in the Workpackages layer has been moved to the Dependencies layer, introducing the Application Entrypoints feature. When running this feature, the customer is asked for a label, which will be applied to each file/node detected as an application entrypoint by the analysis. The new algorithm makes use of the file type to apply a set of precise rules and provide meaningful results. Handled elements are:

- JCL jobs
- CICS transactions
- DB2 stored procedures
- COBOL z/OS and PL1 programs
- COBOL 400 and RPG 400 / RPG-ILE programs (with a distinction between batch and online)
- CL programs
- MNUCMD files

With this improved feature, customers will better understand their codebase and move with more confidence to the scope definition steps of their modernization journey.

Application Features

We introduced the notion of Application Features to describe the subgraph of elements connected to a given entrypoint that does not collide with another entrypoint. From an AWS Blu Insights point of view, it can easily be modeled as a standard workpackage created and linked to this set of elements. To automate the computation of these Application Features based on a set of entrypoints, we have added a new menu entry to the Workpackages page. When launching an Application Features process, a label must be provided by the customer. This label can result from the entrypoints detection, but can also be tailor-made based on inputs from the customer. Each file or node flagged with this label is considered an entrypoint for the Application Features process. A workpackage must also be provided by the customer; it will be the parent of all the workpackages created by the Application Features process.

With this new feature, customers will be able to:

- quickly understand how their applications are structured
- gain useful insights into the different features co-existing in their codebase
- visualize the frontiers between the different features
- use these insights in their modernization plan (sprints, test scenarios, ...)

We will be happy to hear from you about those improvements. Have a productive day!

Read more

AWS Blu Insights accelerates migrations with new AI capabilities

We are excited to announce new capabilities for accelerating AWS Mainframe Modernization with machine learning and generative AI assistance. Using the latest generative AI models in Amazon Bedrock and AWS machine learning services like Amazon Translate, AWS Blu Insights makes it simple to automatically generate code and file descriptions, transform code from mainframe languages, and query projects using natural language.

Customers can now automatically generate summaries of source code files and snippets, making it much easier to understand legacy mainframe applications. If a codebase has comments in languages other than English, customers can view a translation of the comments into English with a click on the console. Blu Insights also makes it much faster to find information within files: customers can now filter data in projects using natural language that Blu Insights automatically converts to specific Blu Age queries. Using generative AI, Blu Insights also speeds up common tasks by classifying codebase files that don't have an extension, converting source files written in languages like Rexx and C, and creating previews of mainframe BMS screens.

Finally, new project management features driven by generative AI simplify project management by taking natural language text like "schedule a meeting" and automating the creation of scheduled events, saving time and improving collaboration. Customers can now take advantage of automatically generated Activity Summaries and Activity Audits, which include the actions taken by AI in a Blu Age project for auditing and compliance purposes.

To learn more, visit the AWS Mainframe Modernization service and documentation pages.

Read more

AWS Blu Insights - Features and functional domains visualization in the dependencies graph

The Dependencies feature is the starting gate to visualize and decompose codebases by technical and functional domains. The graphs resulting from the analysis accompany customers on their assessment journey by providing helpful tools such as visual preferences. One way to illustrate the relationships between business domains and visualize components in the graph is to use colored containers to represent nodes sharing the same Workpackages or Labels.

We wanted to go further and enhance this feature by introducing node groupings (aka Hypergraph) 🎉. This new capability allows you to group/ungroup nodes that belong to the same workpackage or label and visualize them as single nodes. Nodes that belong to multiple workpackages/labels are ignored during the grouping.

With this feature, customers will:

- easily navigate and read condensed graphs, as grouping reduces the number of nodes
- identify nodes and programs more rapidly
- merge thousands of nodes that share the same Workpackages or Labels (i.e. business domains)
- see business domains represented by nodes with new shapes (rectangles for labels and triangles for workpackages) and custom colors, with the aggregated number of files and lines of code
- visualize the interactions between the different parts (i.e. business domains) of the application
- understand the application and domain dependencies at a glance

We received great feedback 🎊 from dozens of AWS Blu Age Certified users who have relied on this feature to speed up the assessment phase. We will be happy 😄 to hear from all of you as well!

Read more

AWS Blu Insights - Assembly Language Analysis

We have improved the support of the Assembly language in AWS Blu Insights! For instance, we added a Cyclomatic Complexity computation and enhanced the Dependencies analysis for Mainframe codebases for ALC, ASM, MLC, MAC and MACRO file types.

Cyclomatic Complexity

The Cyclomatic Complexity quantifies the number of linearly independent paths through a program's source code. This metric is important when assessing a modernization project. For more information, see the documentation.

For Assembly, the calculation is based on branching decision points with extended mnemonic codes and conditional instructions. Branch instructions allow you to specify an extended mnemonic code, for example JE or BNZ, for the condition on which a branch is to occur. The handled mnemonic code categories are:

- used after compare instructions
- used after arithmetic instructions
- used after testing under mask instructions
- branch relative on condition long
- branch relative on condition
- jump on condition long

Example of branching decision points with extended mnemonic codes:

- JE label (Jump on Equal)
- JNZ label (Jump on Not Zero)
- BNZ ERROR (Cluster not found or open error)
- BREL label (Branch Relative Long on Equal)
- BRH label (Branch on High)
- JLNE label (Jump Long on Not Equal)

Example of conditional instructions:

- AIF (T'&T NE T'&F).END (Statement 1)
- AGO .Test (Statement 2)

The decision points that do not affect the Cyclomatic Complexity are Branch or Jump instructions that are unconditional or no-operations, such as B, BR, J, NOP, NOPR and JNOP statements.

Result for a project containing Assembly source files, showing the Cyclomatic Complexity per line of code
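To illustrate the counting rule described in the Cyclomatic Complexity section, here is a minimal sketch of such a computation. The mnemonic list is hypothetical and non-exhaustive, and the real analyzer parses the full HLASM syntax:

```python
# Hypothetical, non-exhaustive set of extended mnemonics counted as branching
# decision points; unconditional B, BR, J, NOP, NOPR and JNOP are deliberately
# excluded, following the rule described above.
CONDITIONAL_MNEMONICS = {"JE", "JNE", "JNZ", "JNM", "JL", "JH", "BE", "BNE",
                         "BNZ", "BL", "BH", "BRH", "JLNE", "BREL"}

def cyclomatic_complexity(source_lines: list[str]) -> int:
    """1 + number of branching decision points found in the source."""
    decisions = 0
    for line in source_lines:
        if line.startswith("*"):                  # HLASM comment line
            continue
        tokens = line.split()
        # The opcode is the first token, or the second when a label is present.
        for token in tokens[:2]:
            mnemonic = token.upper()
            if mnemonic in CONDITIONAL_MNEMONICS or mnemonic == "AIF":
                decisions += 1
                break
    return 1 + decisions
```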
Dependencies

We added 10 new statements for the dependencies analysis (check the dependencies documentation for more details). In the screenshot above, we can see a project with different Assembly files, including Assembly programs (e.g. MLC) but also macro files (MAC), with links among them; the types of the links are also specified. Note that dependencies from other file types (e.g. Cobol) to Assembly are already supported.

We will be happy to hear from you about those improvements. Have a productive day!

Read more

AWS Blu Insights is now available in additional regions

We are excited to announce that AWS Blu Insights, the codebase analysis and transformation capability of AWS Mainframe Modernization Automated Refactor with AWS Blu Age, previously available in the Europe (Paris) AWS Region, is now available in 14 additional AWS Regions. All AWS Blu Insights features, like Codebase, Versions Manager, Secured Spaces, and Transformation Center, continue to be available based on the certification level, and users now have the flexibility to create their projects in the regions of their choice.

With this launch, AWS Blu Insights is generally available in 15 AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Paris), and South America (São Paulo).

For more information, please visit the AWS Mainframe Modernization service product and documentation pages, and the AWS Blu Insights page and documentation.

Read more

AWS Blu Insights - Say Hello to Blu Age Toolbox

In order to deliver successful modernization projects, AWS Blu Age developed, tested, and approved different products ⚒️ based on actual use cases. Those tools have been used on dozens of projects and their efficiency has been validated. To simplify their distribution to L3 Certified individuals, we created a new service in AWS Blu Insights called Blu Age Toolbox 🧰. This new service allows you to:

- see at a glance the list of products, their documentation, and their distribution mode
- request access to one or multiple products
- follow requests and their status

L3 Certified individuals can request access to these tools directly on bluinsights.aws. They need to fill out a form with their project details 📃. Once their request is approved, they will receive installation documentation by email.

For now, the proposed products are:

- Terminals: offers TN3270 and TN5250 terminals for connection to mainframe or AS400 environments. The Terminals record user interactions with the legacy applications and generate artifacts that validate the modernized applications.
- Compare Tool: compares files and database tables and produces detailed reports to check functional equivalence.
- Data Migrator: provides data and schema migration for Blu Age modernization projects.

The distribution mode of these tools can be:

- a Docker image to pull from Amazon ECR;
- binaries 🗃️ of the tool in an S3 bucket from which they can be downloaded.

Existing legacy distribution modes (if any) will be removed in the coming days. Thank you!

Read more

AWS Blu Insights - System Utilities Suggestion

System Utilities is an AWS Blu Insights service in which we list the utilities we encounter on ongoing projects. These utilities are APIs, libraries, packages, product add-ons, etc. AWS Blu Age tooling leverages this information in the Dependencies analysis to resolve false missing programs and to share details about the integration into the transformation engines and runtime.

L2 and L3 certified users can now contribute to enrich our database! Your inputs will be reviewed and approved within 48 hours by the service team (provided the details allow us to proceed). Do not wait to add new utilities, with as much detail as possible, so we can collaboratively enrich our database and improve the quality of the assessment results.

I wish you a productive day!

Read more

AWS Blu Insights - Operational Excellence

Building high-quality software requires rigor and a firm commitment to excellence. Our deep belief in the importance of quality assurance (QA) drives us to implement robust practices to ensure our products meet the highest standards before reaching customers. To maintain the highest bar of quality, we employ diverse mechanisms: a comprehensive and robust workflow, a multitude of tests (unit and end-to-end), meticulous reviews (code, security, and performance), and bug COEs (Correction of Error).

Let's look closer at the workflow we adhere to. It is divided into 13 steps covering:

- 🔎 Setup: the feature is defined and specified, potential impacts on performance, security, or existing components are identified, and architecture choices are described. The most important part of this step is to make sure we are building the right thing the right way: scalability and stability are useless in a product that does not deliver the expected outcomes.
- Validations (Setup, Feature, Code, and Security): we ensure that the development is consistent with the setup and introduces no bugs, performance degradations, or security risks, mixing manual and automated mechanisms. The task owner seeks advice and guidance from other team members, fostering knowledge sharing and team building, and owns the work done and its impacts. During this validation process, modifications may be requested, and thoroughness is highly encouraged, since this phase is crucial to ensure no regression or bug is introduced into the product. During code validation, security guardians are involved to ensure that the code complies with AWS security standards.
- Environment testing: both environments, Canary for preproduction validation and Flamingo for production, are continuously tested manually. Canary is additionally scanned and tested automatically every day. We strive to stay one step ahead of the client by catching bugs before they reach the production environment. We count over 3,000 tests running with Cypress, Playwright, and JUnit for the application, classification, and dependencies, with daily team reports.
- Documentation: thorough documentation accompanies all new features. It serves a dual purpose: empowering our customers with the information they need for efficient and transparent usage, and communicating the benefits of new features to drive their adoption.

Workflow from setup to Canary

At first sight, this workflow may appear complicated and time-consuming, but the investment pays off daily. Thanks to this commitment to quality, we continuously improve our software while encountering less than one issue per week across 750+ active accounts. This workflow also allows us to anticipate incoming problems and challenges. For instance, we implemented a dependencies improvement (see Big graphs just got bigger) before dealing with multiple customer tickets about it. This approach also ensures that we deliver reliable Classification and Dependencies analyses, while continually expanding language and statement support.

Fixing issues is a major point in our workflow. Issues arise from various sources (e.g. users and ticketing systems). To ensure issues are fixed and do not recur, we meticulously describe each issue, identify the scenarios and impacts, and schedule meetings with the involved engineers to discuss the COE. The main points are: What happened? Why? And how do we avoid it happening again?
Our aim is to identify the root cause, create generic solutions, and permanently reduce the number of similar issues.

Identifying issues is a key point in our quest for quality, especially since our aim is to identify and address them before our customers do. To achieve this, we orchestrate monthly BugBash sessions, where the entire team collaborates to "break" the application. We've found this team-building exercise fosters team cohesion while purposefully challenging our product's integrity. All major findings are prioritized and addressed in the following days, if not hours.

Operational excellence is not only about issues. It is also about SLAs (Service Level Agreements), response times, and availability of the service. By leveraging native AWS services such as ECS, Fargate, EFS, and more, the AWS Blu Insights architecture ensures the expected quality. We also continually introduce new mechanisms to reduce the cost of the service infrastructure (see Scaling out/in policies and task protection in practice) without compromising the quality of the service for our customers.

Building innovative services for Mainframe Modernization is challenging, with strict requirements from all stakeholders. We are at the beginning of the journey. While we acknowledge the long roadmap ahead, we remain firm in our commitment to provide a service of the utmost quality. A huge thank you to all my colleagues on the service team for their rigor and commitment, and to our active users for their feedback and use cases. Thanks for reading!

Read more

AWS Blu Insights - Scaling out/in policies and task protection in practice

In our ongoing efforts to enhance Blu Insights and uphold the highest standards, we have prioritized the improvement of our scaling system. In our previous setup, we had to perform scale-in operations manually, because the automated solutions lacked a feature to prevent terminating tasks with active workloads. With the introduction of ECS task scale-in protection, we now have the opportunity to automate scale-in operations. In this article, we share our approach to constructing our scaling system using services such as ECS, Application Auto Scaling, and CloudWatch.

An introduction to scaling

Scaling involves adjusting resources to maintain optimal performance. We can categorize scaling into two main types: vertical and horizontal. Vertical scaling entails increasing or decreasing the power of your resources; for example, you might add more CPU to your server to boost computational power. Enhancing computational power is referred to as 'scaling up', while reducing it is 'scaling down'. Horizontal scaling, on the other hand, involves adding or removing instances, such as adding more servers to handle increased user demand. Adding instances is called 'scaling out', while removing instances is 'scaling in'. In this article, we focus on horizontal scaling.

How ECS performs scaling

ECS is a service introduced by AWS to simplify running containers on EC2 instances (the Fargate mode can be used to avoid managing the EC2 instances). ECS can run continuous workloads, such as a web server, as well as one-off workloads, such as jobs. To perform scaling operations, ECS primarily relies on two services: CloudWatch and Application Auto Scaling. CloudWatch is a monitoring service you can use to collect logs, track metrics related to your application, and create alarms. Application Auto Scaling is designed to automate the adjustment of resources, such as EC2 instances and DynamoDB tables, in response to changing application traffic or load, ensuring optimal performance and cost efficiency. When using Application Auto Scaling, you need to define the scaling policy to use. You can choose from three options:

- Target Tracking Scaling Policy: scales a resource based on a target value for a specific CloudWatch metric.
- Step Scaling Policy: scales a resource based on a set of scaling adjustments that vary with the size of the alarm breach.
- Scheduled Scaling: scales a resource either one time or on a recurring schedule.

The diagram below highlights the scaling process using the target tracking scaling policy. In the schema, changes in workload affect our application's memory utilization. The target tracking scaling policy defined in Application Auto Scaling ensures that our application maintains memory utilization at the target level of 70% by adding or removing capacity from ECS.
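For illustration, here is what registering such a target tracking policy can look like with boto3. The resource names are hypothetical, and this is a sketch rather than our production setup:

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the ECS service as a scalable target (min/max task counts).
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/app-cluster/app-service",   # hypothetical names
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Keep average memory utilization around 70% by adding/removing tasks.
aas.put_scaling_policy(
    PolicyName="memory-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/app-cluster/app-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageMemoryUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```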
How we built the scaling system

Overview of the architecture

Before delving into the methodology we used, let's start with a simplified version of our architecture. In this article, we focus on the app cluster, as it's the one for which we built the scaling system. The app cluster consists of a service that spans multiple app tasks; each task is independent and capable of fulfilling user requests. A load balancer fronts our ECS cluster, distributing the load across multiple tasks. Additionally, we use Fargate to avoid managing the underlying EC2 instances.

Methodology

Based on our understanding of the scaling workflow in ECS and the services involved, we used the following methodology to build our scaling system.

Step 1: Select metrics. We choose the metrics that will drive our scaling decisions. It's crucial to select metrics that directly affect the performance of the application. We can either use predefined CloudWatch metrics, such as CPU utilization, or create custom metrics tailored to the application. To help decide which metrics to use, you can rely on historical data for more insights.

Step 2: Select a scaling policy. We specify the scaling policies to be employed by Application Auto Scaling, choosing from the three types of policies: Target Tracking Scaling, Step Scaling, and Scheduled Scaling.

Step 3: Test the performance. We recommend conducting two types of tests: individual performance testing of the application, to understand peak performance and application limits, and testing of the scaling behavior of the selected policy, which provides insights into how quickly the system scales and responds to spikes in load. Additionally, it's beneficial to run manual tests to gauge the user experience.

Finally, we iterate between the second and third steps until we find the appropriate scaling policy for the application.

Building the scale-out workflow

Based on this methodology, we selected the elements of our scale-out system. After conducting multiple rounds of testing with different scaling policies and parameters, we reached the following conclusions:

- The target tracking scaling policy was not suitable for our needs. The scale-out process was slow, which posed a threat to the application's stability, and we lacked control over the number of tasks added.
- Step scaling proved to be a faster alternative. It gave us complete control over the number of tasks added, which resulted in better handling of spikes.

Below is our scale-out strategy, with its metrics and policies (Step Scaling):

- CPU utilization: > 60% add 1 task; > 70% add 3 tasks
- Memory utilization: > 60% add 1 task; > 70% add 3 tasks
- Request count per target: > 600 req/min add 1 task; > 700 req/min add 3 tasks
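As a sketch, here is how the CPU row of this strategy could be registered with boto3 (hypothetical names; the step bounds are offsets from a CloudWatch alarm threshold set at 60% CPU, and the returned policy ARN is then attached to that alarm):

```python
import boto3

aas = boto3.client("application-autoscaling")

response = aas.put_scaling_policy(
    PolicyName="cpu-step-scale-out",                  # hypothetical name
    ServiceNamespace="ecs",
    ResourceId="service/app-cluster/app-service",     # hypothetical service
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="StepScaling",
    StepScalingPolicyConfiguration={
        "AdjustmentType": "ChangeInCapacity",
        "MetricAggregationType": "Average",
        "Cooldown": 60,
        # Bounds are offsets from the 60% alarm threshold:
        "StepAdjustments": [
            {"MetricIntervalLowerBound": 0.0,         # 60% <= CPU < 70%
             "MetricIntervalUpperBound": 10.0,
             "ScalingAdjustment": 1},                 # add 1 task
            {"MetricIntervalLowerBound": 10.0,        # CPU >= 70%
             "ScalingAdjustment": 3},                 # add 3 tasks
        ],
    },
)
# response["PolicyARN"] goes into the AlarmActions of the CloudWatch alarm.
```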
Building the scale-in workflow

Similar to our approach for the scale-out system, we selected the elements of our scale-in system. Note that we chose to focus solely on the total request count: this decision is based on our performance tests during the scale-out phase and on an analysis of historical data, which revealed that CPU and memory were not suitable metrics for scale-in. After conducting multiple rounds of testing with different scaling policies and parameters, we reached the following conclusions:

- The target tracking scaling policy was not suitable for our application. It proved to be slow, we lacked control over the number of instances removed, and we encountered some 5xx errors.
- The step scaling policy was faster and gave us complete control over the number of instances to be removed. However, it also produced 5xx errors.

The 5xx errors were the result of terminating tasks while they were running workloads during scale-in events. Below is our scale-in strategy, with its metrics and policies (Step Scaling, on request count):

- between 100 and 50 req/min: remove 2 tasks
- between 50 and 30 req/min: remove 4 tasks
- between 30 and 10 req/min: remove 8 tasks
- below 10 req/min: remove 16 tasks

Protecting tasks during scale-in events

In the previous section, we encountered a problem where tasks were terminated while running workloads, leading to 5xx errors. In this section, we discuss the mechanisms we implemented to address this issue. Before diving into our solutions, let's first explore the existing options for mitigating such behaviors. When examining our architecture, two AWS features come to mind for handling these situations: Application Load Balancer deregistration delay and task scale-in protection.

Application Load Balancer deregistration delay

This process allows an Application Load Balancer to let in-flight requests complete on instances being removed from service before fully deregistering them. It ensures a smooth transition, preventing sudden service interruptions when instances are taken out of the load balancer. While this solution is effective for applications with light requests, it has limitations for applications with longer requests. First, the deregistration delay is limited to one hour, which is sufficient for most applications. Second, if your instance completes its in-flight requests within the first 5 minutes and you've set the deregistration delay to an hour, the instance won't be removed from the target group until the delay expires. This can impact the instance's lifecycle, leaving the task in a deactivating state until the end of the deregistration delay. For our application, which includes long requests, the deregistration delay is not suitable: we might have requests that span more than an hour, and setting a high value for the deregistration delay can leave our service with many tasks in the deactivating state, ultimately increasing our billing costs.

Task scale-in protection

Task scale-in protection is a feature designed to safeguard ECS tasks running critical workloads when a scale-in event occurs. To enable or disable this protection, you can use the SDK.
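For illustration, here is a minimal boto3 sketch of toggling this protection (hypothetical names; our actual implementation wraps these calls with the countdown logic described below):

```python
import boto3

ecs = boto3.client("ecs")

def protect_task(cluster: str, task_arn: str) -> None:
    """Shield a task from scale-in termination for up to one hour."""
    ecs.update_task_protection(
        cluster=cluster,
        tasks=[task_arn],
        protectionEnabled=True,
        expiresInMinutes=60,   # safety net if protection is never refreshed
    )

def unprotect_task(cluster: str, task_arn: str) -> None:
    """Make the task a candidate for termination again."""
    ecs.update_task_protection(
        cluster=cluster,
        tasks=[task_arn],
        protectionEnabled=False,
    )
```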
The first diagram below illustrates how our service terminates tasks during a scale-in event without protection, resulting in random task terminations. The behavior changes when task protection is enabled: in the second diagram, certain tasks are protected from termination during the scale-in event. Task scale-in protection is an effective solution to avoid 5xx errors, as it gives us control over which tasks to terminate and when.

Implementing task protection for Blu Insights

To optimize how we use task scale-in protection in our application, we introduced the following process to reduce the number of requests to ECS and track the workloads in progress.

Enabling protection for a task

Our mechanism for enabling task protection takes the following cases into consideration.

Case 1: protecting an unprotected task

- Send an API call to ECS to enable task scale-in protection for one hour. Once ECS fulfills the request, the task becomes protected.
- Set a protection countdown for 50 minutes. This countdown is used to extend task protection.
- Set an inactivity countdown for 15 seconds. This countdown is used to disable task scale-in protection.
- Add the workload ID to the "workloadsInProgress" variable. This variable is used to track workloads in progress.

Case 2: tracking workloads for a protected task

- Reset the inactivity countdown for an additional 15 seconds.
- Add the workload ID to the "workloadsInProgress" variable.

Case 3: handling workload completion

- Remove the workload ID from the "workloadsInProgress" variable.
- Send a response to the client.

Disabling protection for a task

The process of disabling task protection is simple, as highlighted in the following diagram. Two conditions must be met: the task has been inactive for 15 seconds, and no workloads are in progress. When both hold, we send an API call to ECS to disable task scale-in protection; once ECS fulfills the request, the task becomes unprotected.

Task scale-in protection significantly reduced the number of 5xx errors we previously encountered. However, we noticed an edge case leading to a small number of 5xx errors. In the next section, we introduce this issue and the solution we deployed.

Customizing the scale-in process

In our scaling system development, we established suitable scaling strategies and ensured the protection of tasks against termination during scale-in events. However, during further testing, we discovered a rare but critical issue leading to occasional 5xx errors. To understand the issue, let's examine the scale-in workflow highlighted in the diagram below.

- Application Auto Scaling initiates a scale-in request, leading to the removal of one instance.
- ECS begins the termination process for the unprotected task.
- The task transitions through several states, from 'Running' to the final state 'Stopped'. In the 'Deactivating' state, the instance is deregistered from the load balancer, which typically takes between 15 and 40 seconds to complete.
- If the task receives a long request during this timeframe, the application terminates with a 5xx error: the task transitions to the 'Stopped' state before it can finish processing the workload in progress.

In summary, the issue arises because the task continues to receive requests during its 'Deactivating' state.

Solution overview

To address this problem, we aim to stop the load balancer from sending requests to the task before it reaches the 'Deactivating' state. Here's an overview of our solution:

- We configure a CloudWatch alarm to trigger when the request rate drops below 100 requests per minute.
- We use the EventBridge service to detect the alarm-triggered event and invoke a Lambda function.
- The Lambda function simulates the role of the Application Auto Scaling service: it determines the tasks to be terminated, respecting the scale-in policy introduced earlier, and deregisters them from the application load balancer.
- Finally, the Lambda function initiates the scale-in operation with an 'updateService' API call.
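A minimal sketch of what such a Lambda handler could look like (hypothetical names; the ALB deregistration details are omitted):

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical resource names for illustration.
CLUSTER, SERVICE, TASKS_TO_REMOVE = "app-cluster", "app-service", 2

def handler(event, context):
    """Triggered by EventBridge when the low-traffic CloudWatch alarm fires."""
    task_arns = ecs.list_tasks(cluster=CLUSTER, serviceName=SERVICE)["taskArns"]

    # Only unprotected tasks are candidates for termination.
    protection = ecs.get_task_protection(cluster=CLUSTER, tasks=task_arns)
    candidates = [t["taskArn"] for t in protection["protectedTasks"]
                  if not t["protectionEnabled"]][:TASKS_TO_REMOVE]

    # Deregister the candidates from the load balancer first (omitted here),
    # then shrink the service so ECS stops the drained, unprotected tasks.
    ecs.update_service(
        cluster=CLUSTER,
        service=SERVICE,
        desiredCount=max(len(task_arns) - len(candidates), 1),
    )
```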
We’ve delved into the methodology we adopted, the challenges we confronted, potential solutions we explored, and the successful strategies we embraced for our unique application.While the Application Auto Scaling service is adept at meeting the requirements of the majority of applications, there may be instances where your use case demands a higher degree of customization. In such scenarios, AWS stands ready to empower you with the flexibility to construct a custom scaling system by using other AWS services.

Read more

AWS Blu Insights - Dependencies graphs digest: The Summer Scoop

Welcome to the recap of this summer's most exciting updates to everyone's favourite AWS Blu Insights feature: dependencies graphs.

Graphs on the Fast Track

Since the end of July, we've achieved a remarkable enhancement in the speed of navigating and adjusting zoom levels for large graphs: a whopping 10-fold improvement. This substantial leap forward was made possible by a straightforward change: ceasing to display tooltips for node and link intersections at higher zoom levels, exemplifying the "Invent and simplify" leadership principle. 🙌 Additionally, we diligently pinpointed and eliminated redundant graph rendering calls, resulting in up to 20x fewer render calls in total. 🚀

Reflections on a brief innovation endeavor

Unfortunately, not every idea we explore finds its way into our final product. Today, we'd like to give you a glimpse of an enhancement that didn't make the cut: the R-tree spatial index for intersections. This idea was pursued with the aim of enhancing cursor interactions with nodes and links, as well as improving the computation of nodes inside the selection rectangle and nodes within the viewport, among other functionalities. The prototype implementation was undoubtedly impressive, showcasing remarkable performance. However, it presented a significant challenge in the form of a substantial overhead when managing the movement of a large number of nodes simultaneously. Finding effective means to mitigate this update cost proved to be non-trivial and would have introduced substantial complexity to the product. After careful consideration, we have chosen not to incorporate the R-tree index into our product at this time. We firmly believe the efforts and insights gained from this exploration will not go to waste: they will inform our future innovations and continue to pay dividends in our ongoing quest for product excellence. 💪

Organic layout just got smarter

If you've worked with graphs on AWS Blu Insights, then you've likely encountered the need for an organic layout, especially after rearranging node positions or extracting subgraphs. Due to browser resource constraints, the existing organic layout had limitations. As of September, we've introduced a powerful new addition to your toolkit: Smart Organic Layout. 🎉 This is the same robust layout algorithm previously reserved for generated graphs, now seamlessly integrated into the graph's user interface. You can apply it to any selection of files and nodes, allowing you to extract visual insights effortlessly. Give it a try today, and see your subgraphs come to life like never before. 🌱

Graph shearing: The pre-fetch filter to the rescue

For many of our large projects, complex graphs play a pivotal role. However, they can present loading challenges on certain machines. We've heard your concerns, and we've taken them seriously. As of late August, we're excited to introduce a game-changing feature: the pre-fetch filter pop-up. 🎉 This feature is designed with one clear objective: to empower you to work seamlessly on projects with large graphs, regardless of your machine's power. The pre-fetch filter pop-up is an intelligent feature that steps in precisely when you need it: if our system detects that the graph you're about to load is substantial in size, it automatically displays the pop-up. 🦾 In the pop-up, simply handpick the node types you wish to include in your graph, leaving behind what you don't need. Only nodes matching your chosen types will be loaded.
Files, on the other hand, will all be loaded by default. Once you're satisfied with your selection, simply click on "Load graph". Your graph, now streamlined and tailored to your needs, will load seamlessly and can be manipulated and interacted with just like any other graph. For additional details on the pre-fetch filter pop-up, please consult the accompanying documentation.

The Graph Guru's Secret

Naturally, we wouldn't dare wrap up this article without offering you a valuable gift: a practical power-user tip you can start using today. 🚀 For your most ambitious projects, consider exporting your dependencies and re-importing them into a clean, file-free project. This simple yet powerful technique not only allows you to load larger graphs but also significantly accelerates the loading times of your existing ones. Still skeptical? Recall the graph we unveiled last month, boasting an impressive 1.5 million nodes and 5 million edges: it effortlessly came to life through this very approach. In fact, we've found this particular trick so immensely helpful that we're making it into a dedicated service 🤫 Give it a try today, your largest graphs will thank you!

The collaboration tale Cont'd

As we bid farewell to the warm days of summer, we eagerly look forward to the challenges and opportunities that the future holds. We remain dedicated to our mission of continually improving our app and providing you with the tools you need to thrive in your projects. Your feedback has been invaluable in shaping these enhancements, and we encourage you to keep sharing your thoughts and ideas with us. Together, we'll continue to make AWS Blu Insights an indispensable asset for your work. 🤝 Thank you for being a part of our journey. Keep your seatbelts fastened for an exhilarating ride ahead!

Read more

AWS Blu Insights - New ways unlocked in the terminals

Capture & Replay 🎬 never ceases to improve through iterative work, mainly driven by feedback from customers explaining their needs and ongoing experience. Offering a better user experience is always our motto.

As a result, new ways are unlocked in the terminals. They now offer more freedom in setting the session's options ⚒️, a new replay target, and an easier setup experience.

With the new replay ▶️ target, the test artifacts recorded on the TN3270 and TN5250 terminals, for both Selenese-runner and Playwright, can be replayed in the terminal itself (i.e. the source and target applications are both based on TN3270 or TN5250). The application can run in headless mode and be integrated into a CI/CD pipeline. This is very useful for legacy applications replatformed on the AWS cloud.

In addition, an easier setup experience is offered by introducing logs of useful information, like establishing the connection with the legacy server and debugging the data exchange between the server and the terminal.

And lastly, as mentioned earlier, the configuration of the connection now includes:

- Workstation type / emulation type, where different screen modes 📺 can be set, like monochrome or color graphics display; sizes also differ between them.
- A new connection type, TLS v1.2 🔐, for both terminals, where the user provides the certificate or keystore and the legacy server uses them for client authentication. This is crucial to ensure better security and to earn the trust of customers.

As a reminder, you can request it from the Capture & Replay service on bluinsights.aws 🚀 by filling out a form with your project details.

Have a great day!

Read more

AWS Blu Insights - Access to 3270 and 5250 terminals made easier!

During modernization projects, it is crucial to assert functional equivalence between legacy (Mainframe/AS400) and modernized applications. One of our team's tools, Capture & Replay 🎬, is made just for that. The terminals are typically installed in the AWS customer environment, where a connection with the legacy application can be established. This also allows customers to customize their environment at will and specify exactly what permissions they want to grant to our application. Users are also free to choose how they want to set up the connection between this environment and their legacy application. Security is often of the utmost concern during modernization projects, and letting customers keep control over these factors helps us earn their trust.

Previously, the distribution of the terminals was manual: teams wanting to use our tool needed to contact us so we could give them access to the Docker image of the application and provide guidance for the installation. As demand for this tool increased, we decided to automate and simplify the process and allow L3 certified individuals to request access to the tool directly from bluinsights.aws 🚀

In the Capture & Replay service, we added a new form customers can fill out with details on the project they are working on. Once their request is validated, their provided AWS account will be able to pull the application's image from our ECR repository, and they will automatically receive a guide with installation details. Automating this process allows us to simplify and standardize the UX for our customers and concentrate our time on developing new features to better serve them!

Earning the trust of our customers is one of the core tenets of our team. This is why our new system also integrates the application with AWS Key Management Service (KMS) 🔑, ensuring at all times that only customers who have been granted access to the application are using it.

Have a great day!

Read more

AWS Blu Insights - Functional equivalence for modernized Web apps.

We have already presented the Capture & Replay service in previous posts (here and here). In short, it is a native Web implementation of the 3270 and 5250 protocols that allows connecting to z/OS and AS400 online applications, recording all user interactions and screen contents, and generating a test script that can be automatically launched on the modernized application. This test allows checking and validating the functional equivalence of the modernized application compared to the legacy one. Until now, the only target testing framework was Selenium.

Working backward with our customers (delivering mainframe modernization projects), and in order to offer alternatives to a slow and risky Selenium project (too many bugs since the last version, with almost no fixes), we decided to support other frameworks. We are happy to announce the support of Playwright script generation in TypeScript and JavaScript. The extensibility of this framework allows us to provide basic scripts (made of dozens of lines of code) that leverage the JSON file containing all the interactions and fields of the recording. This separation of the script and the content of the test allows high flexibility, simplifies debugging, and speeds up the migration to other testing frameworks, if needed. As a simple code snippet, developers can use the same environment to modernize and test, without the need to install specific environments like Selenium. In addition, SCM tools can be leveraged for the scripts to streamline the integration into a CI/CD pipeline leveraging the official Playwright Docker image.
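To give a concrete feel for this data-driven approach, here is a minimal sketch, using Playwright's Python API for brevity (the generated scripts are TypeScript/JavaScript, and the JSON structure shown here is hypothetical):

```python
import json
from playwright.sync_api import sync_playwright, expect

# Load the recording produced by Capture & Replay (hypothetical structure).
with open("recording.json") as f:
    steps = json.load(f)["interactions"]

with sync_playwright() as p:
    page = p.chromium.launch(headless=True).new_page()
    page.goto("https://modernized-app.example.com")   # hypothetical URL
    for step in steps:
        if step["type"] == "input":
            page.fill(step["selector"], step["value"])
        elif step["type"] == "key":
            page.keyboard.press(step["key"])
        elif step["type"] == "assert":
            # Compare the modernized screen against the recorded legacy field.
            expect(page.locator(step["selector"])).to_have_text(step["expected"])
```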
Last but not least, we are intensive users of Playwright to run end-to-end tests on AWS Blu Insights. We recommend it!

Now, go build!

Read more

AWS Blu Insights - Self-Service Classification

Classification is a key feature in AWS Blu Insights. Based on files' contents, it recognizes their types among 30+ languages, based on 360+ known statements, and the list grows permanently based on customers' requests. Although we can handle most requests in a couple of days, we identified a simple and user-friendly way to inject efficiency into the process by letting users handle their specific needs (i.e. classify the remaining Unknowns) directly in Blu Insights, leveraging existing features. Most times, customers can identify the type of "Unknown" files (usually based on their content, using their own patterns), but they struggle to apply the changes directly in Blu Insights. To fill this gap, we simply added the Manage Types feature to Workspace! Yes, it is as simple as that. 🎉

Concretely, customers can classify "Unknown" files in the following steps:

1. Open the Workspace tab, choose Search item, and select 'Use Regular Expression'.
2. Type the pattern in the Search input and check the results (iterate if needed).
3. Select the result files and click on Manage Types.
4. Finally, choose the target type (or create it if it does not exist) and click on Transform Types.

NB: The video is not an actual scenario; it is a simple illustration.
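As an illustration of step 2, a content pattern like the following (hypothetical) could identify COBOL copybooks among the Unknown files; your own patterns will depend on your codebase:

```python
import re

# Hypothetical pattern: lines starting with a level number and a data name,
# typical of COBOL copybook record layouts.
COPYBOOK_PATTERN = re.compile(r"^\s*0[1-9]\s+[A-Z0-9-]+\.", re.MULTILINE)

def looks_like_copybook(content: str) -> bool:
    return bool(COPYBOOK_PATTERN.search(content))
```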

Read more

AWS Blu Insights - Ability to add links between To-Do cards

Hi all,

In the previous version of AWS Blu Insights, To-Do cards could be linked to documents, uploads, and external links. In the latest available version, we also added links to other cards. This is especially useful to create relationships between cards belonging to the same topic and to create your own workflow. To link a card, simply click on it, go to the attachment section, and click on the "Cards" button. Then, you can select the card(s) to link. We hope this new feature will be helpful in managing your cards and will enhance your experience with a smoother and more productive workflow.

Have a great day!

Read more

AWS Blu Insights - Dependencies guided-enrichment using cross references

The AWS Blu Insights dependencies analysis handles dozens of programming languages, cumulating hundreds of statements (~250) that trigger links between programs, files, objects, and more. The results of the dependencies engines are based on official documentation (when available) and on concrete use cases (refactoring projects), with iterations on specific cases and new findings. Over the years, we have significantly improved the outcomes, so that a high percentage of the dependencies are automatically detected with no need for a Codebase deep dive. However, supporting all the languages and all the statements with their respective options is a tremendous task, as we are continuously surprised by the imagination of the developers of those legacy systems in missing and using all the documented and undocumented options of known and unknown languages.

For this reason, we designed the dependencies analysis to let you benefit from the results without being blocked by missing statements or specific cases. AWS Blu Insights computes all the known statements, which are documented by language and type (Cobol example), and offers a set of features to let you iterate over the results to reach the expected graph (e.g. Manage Types, Manage Extensions, Import/Export JSON). It is also possible to use Workspace to look for a specific statement (e.g. using regular expressions). This is usually combined with extra (meta) information shared by the customer and imported as labels or workpackages. Furthermore, users can download the dependencies as a JSON file and re-upload it after editing with a text editor or any program written in your favorite language. Lately, we also introduced graph operations to speed up the editing, adding and deleting nodes and edges directly in the graph using an intuitive UI. Combined, these features allow you to make progress on the assessment without waiting for the product team to handle all the findings (although we heavily recommend sharing them to continuously improve the engines).

With this work, we wanted to go a step further and recommend a list of potential missing links you may add to your graph within a few clicks. These links are "potential" because their detection does not rely on a specific rule based on the programming language, but on a generic finder looking for cross-references in files independently of their types.

How does it work?

When users request a specific dependencies analysis (e.g. Mainframe), AWS Blu Insights triggers the cross-reference scanning in parallel (no impact on performance, as it runs in its own container and only takes a few minutes). Once the results are available, the extra links not detected by the dedicated analysis are proposed. You can analyze these potential missing links to explore missing nodes, isolated nodes, etc. One main issue with this approach is the number of false positives. To address it, we filter out the dependencies found only in comment lines for COBOL and JCL, and we remove dependencies that only differ in the link direction between the two analyses.

You can start using this feature through the new option "Show more links", which displays the list of potential missing links and their details (source, target, line of detection), lets you preview the code, and more.
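The actual engine is more elaborate, but the general idea of a type-agnostic cross-reference finder can be sketched as follows (names and heuristics are illustrative):

```python
import re
from pathlib import Path

def find_cross_references(files: list[Path]) -> set[tuple[str, str]]:
    """Propose (source, target) links wherever one file's base name appears
    in another file's content, regardless of the file types."""
    links = set()
    names = {f.stem.upper(): f.name for f in files}
    for source in files:
        content = source.read_text(errors="ignore").upper()
        # Comment-line filtering for COBOL/JCL (mentioned above) is omitted.
        for token in set(re.findall(r"[A-Z0-9$#@-]{3,}", content)):
            target = names.get(token)
            if target and target != source.name:
                links.add((source.name, target))
    return links
```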
Note: Existing projects need to re-launch the dependencies analysis to enable the feature.

This option can mainly be used in two ways:
* Selecting a group of nodes and looking for their potential missing dependencies.
* Selecting all the nodes (prefer doing this in subgraphs if the graph is large) and looking for all the potential parents of one node.

Showcases

In the first case, we want to know the potential dependencies of CBACT04C.cbl. To do so, we select the file, click on “Show more links”, cherry-pick from the list of potential links, and save.

In the second case, we want to find all the potential parents of CARDFILE.jcl. We select all the nodes, click on “Show more links”, and then filter on all matches for CARDFILE.jcl. We can add the potential missing links the same way we did in the first case.

We have observed excellent results, especially for languages not supported by the dedicated analysis (e.g. Shell). We would like to hear from you and get your feedback and suggestions.

Now, go build!
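For intuition, here is a minimal sketch of a generic cross-reference finder in the spirit described above. It is an illustration only, not the Blu Insights implementation: every file's base name is treated as a candidate symbol, the other files are searched for occurrences, and COBOL/JCL comment lines are skipped to limit false positives. The extensions and tokenization rules are assumptions.

```python
import os
import re
from collections import defaultdict

def is_comment(line: str, ext: str) -> bool:
    # COBOL fixed format: '*' in column 7 marks a comment line.
    if ext == ".cbl" and len(line) >= 7 and line[6] == "*":
        return True
    # JCL: lines starting with '//*' are comments.
    return ext == ".jcl" and line.startswith("//*")

def find_cross_references(root: str) -> dict:
    """Map each file to the set of other files whose base name it mentions."""
    paths = [os.path.join(d, f) for d, _, files in os.walk(root) for f in files]
    # Base name -> path (homonyms overwrite each other in this simplification).
    names = {os.path.splitext(os.path.basename(p))[0].upper(): p for p in paths}
    links = defaultdict(set)
    for path in paths:
        ext = os.path.splitext(path)[1].lower()
        with open(path, errors="ignore") as fh:
            for line in fh:
                if is_comment(line, ext):
                    continue  # drop matches found only in comments (false positives)
                for token in re.findall(r"[A-Z0-9$#@_-]+", line.upper()):
                    target = names.get(token)
                    if target and target != path:
                        links[path].add(target)
    return links
```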

Read more

AWS Blu Insights - Visualization and commit of TC outputs

The Transformation Center (TC) service outcomes are the modernized source code files. Within a few clicks and almost no configuration, users get their “outputs” ready for download as an archive. Once it is unzipped, users load the code into their IDEs and start testing and debugging to make sure it executes as expected. The process is iterative and may require multiple executions, which generates multiple downloads and file versioning (manual or using SCM tools). When dealing with an entire project, the number of files is always high (not to mention all the scripts, configuration files and intermediate files used for debugging). Sometimes, users only need to check a fix or a refactoring option in one specific file. The workflow can rapidly become overwhelming and inefficient. To address this need and remove friction, we introduced 2 new options to manipulate the outputs (besides download):

👀 Visualize in Workspace

Workspace is an IDE-like environment already available in Blu Insights Codebase projects. It offers a set of features similar to what you can find in a modern IDE like VS Code. We now also make it available in TC projects to let you browse the outputs and benefit from all its features.

🚀 Commit to CodeCommit

You get it. 🎉🎉🎉 Combined with visualization, you can now push what matters to the CodeCommit repository (and branch) of your choice thanks to a dedicated Booster, streamlining the workflow from code transformation to code test and debug. The creation of the repository, branches, security configuration, etc. remains on CodeCommit. Blu Insights simply connects to the repository and pushes the outputs upon your request.

To sum up, you now have 3 options available when selecting a TC Run: Download, Open in Workspace and Push to CodeCommit. You can find more details about this feature in the User Guide and FAQ. If you have other needs or ideas to boost productivity and inject efficiency, we are always happy to hear from you. Have a productive day! 😊

The AWS Blu Insights Team.

Read more

AWS Blu Insights - Transformation Center usage billing

AWS Blu Insights, accessible at https://bluinsights.aws/, offers a set of services that cover different needs of legacy source code modernization projects. It is part of the AWS Blu Age Refactoring solution of AWS Mainframe Modernization. Access to those services is based on the certification level. For the highest certification level (L3), users have access to 10 services, as depicted in the screenshot: Codebase, Versions Manager, To-Dos, Transformation Center, Secured Spaces, Capture & Replay, Business, Time Tracker, System Utilities, and Library / Training. Those services are free except the Transformation Center (TC), which is subject to billing (see the pricing page, although it has not been updated yet). The Transformation Center is the service that leverages the AWS Blu Age transformation engines in an automated and user-friendly workflow to transform the code (e.g. Cobol, RPG...) and produce the modernized source code (e.g. Java, Angular, Groovy...).

A few weeks ago, we announced the release of the billing mechanisms of the Transformation Center. Working backward with our customers and partners, we identified critical adjustments to the pricing model. We refined it and updated the mechanisms following this simplified pattern. At the creation of the TC project, AWS Blu Insights automatically calculates:
* the total number of lines of code candidate for transformation;
* the number of installments (i.e. to avoid upfront billing and trigger billing in parallel with project progress).

Users benefit from a free tier of 120,000 lines of code. Once it is consumed, billing occurs per installment.

For more details, please refer to the documentation and FAQ.

The AWS Blu Insights team
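For illustration only (the authoritative rules are in the documentation and FAQ), here is a sketch of the free-tier arithmetic. The equal-split installment logic below is a hypothetical simplification, not the actual pricing model:

```python
FREE_TIER_LOC = 120_000  # free tier of lines of code mentioned above

def billable_lines(total_loc: int) -> int:
    """Lines of code subject to billing once the free tier is consumed."""
    return max(0, total_loc - FREE_TIER_LOC)

def installment_sizes(billable: int, installments: int) -> list[int]:
    """Hypothetical equal split of billable lines across installments,
    so billing is triggered in parallel with project progress."""
    base, remainder = divmod(billable, installments)
    return [base + (1 if i < remainder else 0) for i in range(installments)]

# Example: a 500,000-LOC project billed in 4 installments.
print(installment_sizes(billable_lines(500_000), 4))  # [95000, 95000, 95000, 95000]
```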

Read more

AWS Blu Insights - Dependencies and Classification engines

The classification and dependencies analyses are key features in AWS Blu Insights. They let you rapidly dive deep into the codebase and start the assessment within a few minutes, independently of the size of the codebase. In this short post, we want to share with you some metrics about those features.

Metrics

# of launches per month

During the last 12 months, AWS Blu Insights handled an average of 363 dependencies and 252 classification analyses per month, with a success rate over 99%. 🚀🚀

Classification analysis

Month   | Failed analyses | Requested analyses | Success rate
2022-06 | 0  | 160  | 100%
2022-07 | 6  | 205  | 97.07%
2022-08 | 1  | 262  | 99.62%
2022-09 | 1  | 169  | 99.41%
2022-10 | 0  | 174  | 100%
2022-11 | 1  | 180  | 99.44%
2022-12 | 0  | 273  | 100%
2023-01 | 1  | 318  | 99.69%
2023-02 | 3  | 357  | 99.16%
2023-03 | 1  | 311  | 99.68%
2023-04 | 1  | 228  | 99.56%
2023-05 | 4  | 404  | 99.01%
2023-06 | 0  | 238  | 100%
Total   | 19 | 3279 | 99.42%

Dependencies analysis

Month   | Failed analyses | Requested analyses | Success rate
2022-06 | 4  | 220  | 98.18%
2022-07 | 4  | 242  | 98.35%
2022-08 | 0  | 337  | 100%
2022-09 | 5  | 236  | 97.88%
2022-10 | 0  | 281  | 100%
2022-11 | 1  | 298  | 99.66%
2022-12 | 12 | 349  | 96.56%
2023-01 | 6  | 634  | 99.05%
2023-02 | 5  | 354  | 98.59%
2023-03 | 2  | 460  | 99.57%
2023-04 | 4  | 319  | 98.75%
2023-05 | 1  | 590  | 99.83%
2023-06 | 2  | 400  | 99.50%
Total   | 46 | 4720 | 99.03%

# of supported languages and statements

The dependencies engine supports 26 programming languages and almost 250 statements, including 120 for mainframe and AS400. We documented all supported statements (including dynamic calls). The classification engine handles 33 programming languages (along with empty and binary files) for a total of 345 statements.

# of tests

As of today, we maintain 2,370 tests performed on 28 reference projects. We commit new changes only if all the tests pass:
* 2,000 tests for the dependencies;
* 364 tests for the classification;
* 6 performance tests.

# of reviews and validations

The high success rate presented above is the outcome of a rigorous work methodology and a set of mechanisms put in place. Based on new requests raised by customers or identified by the team, we deep dive into the documentation (if any), check as many examples as possible, specify the feature (describe how it will be supported), write unit tests, etc. The entire process consists of almost 9 distinct steps, including 5 mandatory reviews by other team members. ✅

Yes, but we are not done...

Although we cover a lot of languages and statements, we continuously discover new ones. In most cases, we can handle them within a few days; we designed the engines to allow incremental extensions and the support of new findings. In addition, we highly encourage users to move forward with the existing features that allow them to adapt and fine-tune the obtained results. For example:
* They can leverage Manage Types combined with BQL or Workspace to identify other types.
* They can add/remove links in their graph using Graph Operations.
* They can download the dependencies result as a JSON file and rework it.
* They can leverage Show More Links to identify/add missing links.

We are also working on other features to help you move forward. In the meantime, we encourage you to create tickets and let us know whenever you identify an improvement we could add to our engines. 🤝

Now, go build!
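For clarity, the success rates in these tables are simply (requested − failed) / requested; a quick check against the totals:

```python
def success_rate(failed: int, requested: int) -> float:
    """Success rate as reported in the tables above, in percent."""
    return round(100 * (requested - failed) / requested, 2)

print(success_rate(19, 3279))  # 99.42 -> classification total
print(success_rate(46, 4720))  # 99.03 -> dependencies total
```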

Read more

AWS Blu Insights - Jump-start your Blu Age projects with BI Builder

AWS Blu Insights offers a set of services that we deliberately built as independent building blocks. For example, when users create a Codebase project, a component called Generic Analysis gathers basic metrics (lines of code, effective lines, comment lines, total number of lines...). Based on the properties of the uploaded files, other components like Classification are proposed. Users then have the possibility to launch the Dependencies component, and so on. This way, users see the initial codebase and then incrementally see the insights generated upon request.

This is important for understanding the workflow and the generated results, especially during the assessment phase. However, once you have been trained or have been using Blu Insights for months or years, the process becomes mechanical 🥱, with almost no added value and without a second thought about the sequence. In addition, as the components become more and more mature, the quality of the insights is high and manual fine-tuning is rarely required.

Do you know how many clicks it takes to get your first Transformation Run? A lot, we know 😳! 25 is the right answer: 14 clicks to get a ready-for-assessment Codebase project and 11 clicks to get the outputs of your first Transformation Center Run. We heard users say “Hey BI team, make those clicks for me and let me focus on things that matter more.” And we did it.

We are pleased to announce the release of Blu Insights Builder 🥳, a new wizard that takes you from archive upload to outputs download in 1 unique and basic step ⚡. Drop an archive into Blu Insights and let it do the job. It will automatically perform the following actions without any manual intervention:
* Create the Codebase project
* Launch the Classification
* Launch the Cyclomatic Complexity
* Launch the Dependencies
* Create a Transformation Center project (if the user’s role allows it)
* Launch a Run with all inputs and the latest Velocity version (only to get the weather report - no outputs - no billing)

Does it mean that you will always click once and get everything done? The answer is “Yes, in most cases!”. But keep in mind that you can iterate over the results using the different features (e.g. Manage Types, Manage Extensions, Graph Operations) to get the results that fit your needs and requirements.

We give you back dozens of minutes per project. You are welcome. Enjoy 😊

The AWS Blu Insights team

Read more

AWS Blu Insights - Supercharge your graphs with custom artefacts

The most efficient way to deep dive into a codebase is to start from its dependencies graph. The latter contains insights that, within a few clicks and filters, reveal a lot about the applications without reading the code or any documentation (if it exists) 🚀. The process is iterative, and the findings matter both for the assessment (identifying missing or obsolete code, duplications, complexity, common libraries and data, highly connected programs, etc.; see more in Dependencies) and for the transformation, since each program must be transformed together with its dependencies (see details in Transformation Center).

Computing and displaying the dependencies graph is one of the most used features in AWS Blu Insights. The generated graphs contain different artefacts:
* Vertices ⚪: represented as circles with different colors based on their types. They can refer to:
  * Files: actual files in the codebase.
  * Nodes: virtual objects like System Utilities, missing programs, database objects, etc.
* Edges 🔗: links between the vertices, with different types based on the dependencies (e.g. Call, EXEC CICS, Copy...).

The graph may also display meta information like labels and workpackages. Users can customize all these artefacts by editing their types and colors. They can also hide and show nodes in their subgraphs. For expert users, it is even possible to download the results as a JSON file, rework it and upload it again. Reworking the JSON file can be done in an IDE, a text editor, or a program written in any programming language (see the sketch at the end of this post). Usually the goal is to add or remove vertices and/or edges to obtain the graph that best fits the customer's needs or the project methodology (e.g. handling the data and code management in different steps).

Many of our customers asked us to inject more automation into the process and reduce friction, i.e. do most of the rework directly in Blu Insights in a simplified way. So we did it 🎉. You can now add, customize and remove edges and vertices within a few clicks. Adding a node is as simple as specifying a name, a color, a type (existing or new) and a description. A new link has a type (existing or new), a source node and a target node.

Try it and let us know. We have a long list of improvements we want to add to this feature; they will be available in the next versions. Stay tuned! 😊

The AWS Blu Insights team
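For those who prefer scripting, here is a minimal sketch of the JSON rework approach in Python. The schema (top-level "vertices" and "edges" arrays and their field names) is an assumption made for the example; refer to the documentation for the actual export format.

```python
import json

# Load a dependencies export. The file name and schema ("vertices"/"edges"
# with "name", "type", "source", "target" fields) are assumptions for this sketch.
with open("dependencies.json") as fh:
    graph = json.load(fh)

# Add a virtual node, e.g. an in-house utility the engines did not know about.
graph["vertices"].append({"name": "MYUTIL", "type": "System Utility",
                          "color": "#1f77b4", "description": "In-house utility"})

# Add a link from an existing program to the new node.
graph["edges"].append({"source": "CBACT04C.cbl", "target": "MYUTIL", "type": "Call"})

# Remove edges of a type we want to handle in a separate step (e.g. copybooks).
graph["edges"] = [e for e in graph["edges"] if e["type"] != "Copy"]

with open("dependencies-reworked.json", "w") as fh:
    json.dump(graph, fh, indent=2)
```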

Read more

From S3 to AWS Blu Insights

Creating Codebase projects is as easy as uploading (from local machines or from Secured Spaces) an archive with the artefacts to be analyzed and transformed. Many of you asked for other options to give customers more flexibility by leveraging other existing AWS services. As usual, we dove deep into this feedback, considered all the aspects (e.g. security, user-friendliness, benefits, effort...), prototyped a solution, iterated internally, challenged the results and submitted them for a thorough review.

Today, we are happy to announce the availability of a new option to create Codebase projects from zip/7z files hosted on S3. It is as easy as creating an S3 presigned URL for your archive and entering it into Blu Insights:

On S3
* Go to your S3 bucket
* Select your archive (Zip or 7z)
* Click on “Actions”
* Click on “Share with a presigned URL”
* Specify the "Time interval until the presigned URL expires"
* Click on "Create presigned URL"

On Blu Insights
* Enter Codebase
* Click on “+ New”
* Choose “S3 Bucket presigned URL”
* Paste the presigned URL
* Hit the “Create a new project” button

That's all! 🎉 Blu Insights will upload the archive from S3 to its own infrastructure (similar to existing uploads) and proceed with the creation of the project. You can check the details in the documentation and FAQ. As usual, your feedback is valuable. Please reach out if you have questions and/or suggestions 🙏.

The AWS Blu Insights team
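If you prefer scripting over the console steps above, presigned URLs can also be generated with the AWS SDK. A minimal sketch with boto3 (bucket and key are placeholders):

```python
import boto3

# Placeholders: replace with your bucket and the key of your zip/7z archive.
BUCKET = "my-bucket"
KEY = "codebases/my-legacy-app.zip"

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": BUCKET, "Key": KEY},
    ExpiresIn=3600,  # URL validity in seconds (here, one hour)
)
print(url)  # paste this URL into Blu Insights ("S3 Bucket presigned URL")
```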

Read more

Automatic constraints-based subgraphs extraction

Can you guess how many nodes and edges this graph contains? The exact answer is 1.5 million nodes and 2 million edges! That’s a lot of artefacts, and it’s not even the largest codebase we’ve seen in Blu Insights! As of today, the record is held by a codebase with over 2 million nodes and 8 million edges. Customers are often amazed when they see it: while they’re aware of the size of their portfolio, most have never measured or visualized it.

Each of these artefacts has a set of properties, including its type, location (library), homonyms, business domain, feature, number of lines of code, parents, children, etc. All of these details need to be considered during the assessment phase.

Blu Insights offers a user-friendly filtering system based on BQL. It allows combining all of these properties to iteratively assess the codebase, breaking it down into smaller subgraphs based on the project’s modernization requirements (e.g. application isolation, features, business domains, common modules, POC scope, etc.). Other features, like subgraph and group labeling, layout customization, and more, also help you dive deeper into large projects like these. Users typically leverage all these tools to split such a monster graph into smaller, manageable subgraphs. This exercise requires expertise, manual iterations, and patience.

Over the past few months, we’ve observed a panel of Blu Insights users with different profiles, including SDEs, business developers, SDMs, and more, with varying levels of expertise and time spent on modernization projects. We asked them for demos on concrete projects and asked them to identify the most repetitive and complex tasks. We also analyzed the most frequent questions we receive from trainees and new joiners. Finally, we combined all of these insights and imagined how we could address those needs by leveraging automation to improve operational efficiency in the assessment phase.

Different features have already been released, requiring a re-architecture of the underlying dependencies management module. Among those features, we can mention “Graph Operations”, “Explore more links”, and “BI Builder” (more details will be provided in the upcoming days, although the documentation is already up-to-date).

Today, we are excited to announce yet another new feature: Automatic constraints-based subgraphs extraction. 🎉 This feature enables you to automatically extract subgraphs by only specifying certain criteria, which tremendously simplifies the tedious process of manually selecting files and nodes when splitting a large graph.

To get started, simply select the files and nodes to extract from. Existing filtering techniques can be leveraged for that. For instance, functional label filters can isolate business domains and functional features, so that the resulting subgraphs subscribe, by design, to a functional dimension. Then click on the “Generate subgraphs” option in the menu at the bottom and follow the configuration steps. For the purpose of this example, we keep the defaults for the first and second steps. On the constraints step, we specify our requirement to extract subgraphs that don’t exceed 6k files and 2 million effective lines of code. For perspective, this graph has over 250k files and 50 million effective LOCs. Under the hood, when the “Generate” button is clicked, over 50 subgraphs are generated, scored with an internal scoring algorithm, then ranked based on their score.
Finally, only the top 5 are picked and proposed. The scoring algorithm uses internal heuristics based on subgraph metrics like size. We also expose a window for tweaking it by specifying your own ranking criteria based on file type distribution.

Once the subgraphs are extracted, they can be found in a group created especially for the occasion, and they can be navigated and updated like any other manually extracted subgraph.

Cherry on top, the files and nodes are not picked arbitrarily: no file or node is included in a subgraph without all its dependencies (both direct and transitive). This makes the scope of each subgraph self-sufficient, i.e. no other file or node should be needed from the original graph.

As a bonus, note that subgraph generation is also available directly on subgraphs. This means that you can iterate on the generated subgraphs and split them further until the functional and technical expectations are met.

Please note that, at the time of posting this article, this feature was in its beta testing phase, which means it may have undergone significant changes since then.

—The Blu Insights team
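The self-sufficiency property described above amounts to taking the transitive closure of the dependencies of the selected seeds. A minimal sketch of that closure step (the adjacency-list representation and the constraint check are illustrative assumptions, not the internal algorithm):

```python
from collections import deque

def dependency_closure(seeds, deps):
    """Return the seeds plus every direct and transitive dependency (BFS)."""
    closure, queue = set(seeds), deque(seeds)
    while queue:
        node = queue.popleft()
        for dep in deps.get(node, ()):
            if dep not in closure:
                closure.add(dep)
                queue.append(dep)
    return closure

# Toy graph: A calls B, B copies C, D is independent.
deps = {"A": ["B"], "B": ["C"], "D": []}
subgraph = dependency_closure({"A"}, deps)
print(subgraph)  # {'A', 'B', 'C'} -- self-sufficient: no outside node is needed

# A candidate subgraph is kept only if it satisfies the constraints, e.g.:
MAX_FILES = 6_000  # mirrors the 6k-files constraint used in the example above
assert len(subgraph) <= MAX_FILES
```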

Read more

Impacts analysis in Versions Manager

During the lifespan of modernization projects, the legacy application continues receiving new features, fixes and improvements. These modifications to the codebase have to be integrated into the modernization process; this integration is commonly called “Code Refresh”. Code changes have impacts on the ongoing modernization (test cases, test coverage, decomposition, etc.) that need to be tracked and identified in order to guarantee the functional equivalence between the modernized application and the legacy application running in the production environment.

AWS Blu Insights includes a service called Versions Manager which automatically compares 2 Codebase projects (reference and refreshed) and identifies added, deleted and modified files. It is now getting an additional module called Impacts! This new module lets you see at a glance all workpackages, test scenarios, labels, statuses and team members that are impacted by the code refresh. The Impacts module simply goes through all the artefacts (workpackages, test scenarios, statuses, labels and team members) of the reference project and checks whether they are linked to a file that is modified or deleted in the refreshed project. Each artefact category has its own tab listing the artefacts along with the files impacting them.

For example, in the screenshot, we can see that the workpackage “Veery” has 2 impacts related to 2 files:
* CVTRA02Y.CPY has been deleted from the refreshed project.
* COACTUPC.CBL has been modified in the refreshed code. It can be viewed and compared across the two projects to see the actual differences.

In actual projects totaling thousands of source code files and millions of lines of code, this automated impact analysis injects efficiency 🙌 into the process, as delivery teams can get back to customers within hours to discuss the changes and define the new project scope (e.g. to update estimates and test scenarios). In addition, Versions Manager projects are collaborative, i.e. all stakeholders can be invited to reason about the findings, avoiding emails, Excel exports and file sharing.

This new version also embeds a set of small improvements and fixes (e.g. a new invitation system for Versions Manager). You may check the documentation for more details. This new feature has been designed and built with the help of the AWS Blu Age delivery teams to stress it on actual projects. Thanks to all the stakeholders for their time and help. We have also prepared a list of improvements to come in the next versions. A true illustration of teamwork to meet customer expectations.

Have a great day!

The AWS Blu Insights team.
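At its core, the check performed by the Impacts module can be pictured as a set intersection between each artefact's linked files and the set of changed files. A minimal sketch under assumed data shapes:

```python
def impacted_artefacts(artefacts, changed_files):
    """Return artefacts linked to at least one modified or deleted file.

    `artefacts` maps an artefact name (workpackage, label, scenario...) to the
    set of files it is linked to in the reference project; `changed_files` is
    the set of files modified or deleted in the refreshed project.
    """
    return {name: linked & changed_files
            for name, linked in artefacts.items()
            if linked & changed_files}

workpackages = {"Veery": {"CVTRA02Y.CPY", "COACTUPC.CBL", "COCRDSLC.CBL"}}
changed = {"CVTRA02Y.CPY", "COACTUPC.CBL"}
print(impacted_artefacts(workpackages, changed))
# {'Veery': {'CVTRA02Y.CPY', 'COACTUPC.CBL'}} -- 2 impacts, as in the screenshot
```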

Read more

Notes Board

Rémi, an AWS Blu Insights user, reached out to the team and shared this feedback:

“Dear Blu Insights team, while I was using the To-Do feature, it felt to me like it was missing a space where I could write general board notes (not just card notes in the ‘Comments’ section of a card). These board notes would supplement my to-do cards in keeping track of my project progress, and help me list the goals (and roadmap) of the board. I have attached a picture of what these board notes could look like.”

With this request, Rémi answered the 3 questions we address when working on any new idea:
* What? A space to take notes, accessible from my To-Dos board.
* Why? I need to take notes independently from my cards.
* How? A simple board on the right side of my board.

Rémi did all the setup, explaining the requirement, the actual need and a potential solution. Usually we challenge the “How”, but not this time, as it fits perfectly in Blu Insights. The feature is now available in AWS Blu Insights and we have already received positive feedback from other users. We also have a list of potential improvements.

Enjoy, and kudos to Rémi!

Read more

Information Management System – Classification and Dependencies

Dear Blu Insighters,

As you already know, AWS Blu Insights is constantly evolving to fulfill customer needs. Adding another feather to our cap, we are happy to announce that Blu Insights now supports IMS (Information Management System) in the Classification and Dependencies analyses. The classification analysis recognizes the PSB (Program Specification Block), DBD (Database Description) and MFS (Message Format Service) file types of IMS, and the dependencies analysis can establish the dependency between PSB and DBD.

[Screenshots: raw IMS files before classification; file types after the classification process; the resulting dependency relationships.]

Refer to the Blu Insights documentation for all extension types supported by the classification analysis, and to the IMS documentation for the statements responsible for creating dependencies. Now, it’s time for you to dive deep and provide more insights to the customers.
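As a toy illustration of the kind of link the dependencies analysis establishes: a DB PCB statement in a PSB names the DBD it accesses through its DBDNAME keyword. A sketch only, not the actual engine:

```python
import re

# Toy PSB source: a DB PCB statement names the DBD it accesses via DBDNAME.
psb_source = """\
PCB   TYPE=DB,DBDNAME=CUSTDBD,PROCOPT=A,KEYLEN=12
PCB   TYPE=TP,MODIFY=YES
PSBGEN LANG=COBOL,PSBNAME=CUSTPSB
"""

# Extract PSB -> DBD dependencies (illustrative regex, not the real parser).
dbd_names = re.findall(r"TYPE=DB\b.*?DBDNAME=(\w+)", psb_source)
print(dbd_names)  # ['CUSTDBD']
```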

Read more

Transformation Center – Velocity version

Hi all, To enhance the user experience of the Transformation Center, we delivered an improvement that helps you rapidly identify the previously used Velocity version (Current) and the newest available version (Latest), both for Official and Nightly builds. Thank you.

Read more

Blu Insights Query Language (BQL)

Hi all, The filtering capabilities are among the most used and appreciated features in Blu Insights. The Blu Insights Query Language (BQL) is available in almost all the filters, except a very few. We heard from you that it was difficult to know where BQL is supported and where it is not. Good news: we made it simple to see at a glance. Simply look at the icon to the left of the filter and you will know. Don’t worry, there is no change to the current BQL filters’ behavior; this is a cosmetic update. A small update with a powerful impact on the user experience. Have a great day!

Read more

Capture & Replay – Working backward to deliver a better product

Hi all,

Over the last weeks, the team iteratively worked on the Capture & Replay service, especially the TN5250 terminal for the ongoing AWS Blu Age modernization projects (over 400 test scenarios recorded and being integrated into the CI toolchain). We received feedback from end-users covering different aspects: missing interactions, generated outputs, test script length, user-friendliness, etc. The exercise is tricky as it involves multiple stakeholders (testers, developers, business users, AS400 operators…) with various perspectives and needs. As a result, the outcome is a far better product. For instance, the latest version released last week embeds the following improvements:
* Reduction of the SIDE file size by 20% (for faster loading).
* Integration of a recommendation and alert system for test duration (for better integration).
* Generation of Selenese-runner-compliant files (for faster execution).
* Improvement of the readability of the SIDE files (for better formatting).
* Integration of timestamps for the commands (for easier debugging).

We also built a standalone version of the terminals, available in preview. This new application brings exactly the same features, but it is available outside of AWS Blu Insights. If you are interested in beta testing, please let us know.

Thank you.

Read more

Custom Profile

Bonjour, The Single Sign-On comes with a bunch of new possibilities, e.g. creating a brand new Blu Insights environment per customer or per project using different AWS accounts. To let you identify at a glance which environment you are working in, we introduced a nice new widget to customize your profile. How does it work? Simply insert a description in your profile and your avatar will be customized (see the one proposed by a Blu Insights lover). Can’t wait to see yours. 😊 Have a great day!

Read more

Transformation Center – One click to build your team

Dear Blu Insighters, Working backward is in the DNA of our team. We heard from you that inviting team members to Transformation Center projects was tedious and time consuming (one team member at a time, having to do it first in the Codebase project, etc.). We simplified the process so you can do it within 1 click, all in your TC project (even for new members who are not part of the reference Codebase project). We give you back more time to delight your customers! Have a nice day.

Read more

[Video] Capture & Replay illustrated on CardDemo

Dear Blu Insighters,

Making sure the modernized application behaves exactly like the legacy one (aka functional equivalence) is not optional for successful modernization projects. Testing is difficult and requires a holistic approach combining initial and target conditions with given datasets, user interfaces, test cases, etc. A very simplistic description would be: we need to capture (or record) the test scenarios on the legacy application and replay them on the modernized one. This is exactly what the Capture & Replay service in AWS Blu Insights does for screen (online) testing, leveraging built-in TN3270 and TN5250 terminals.

In a simplified workflow:
* Users of the legacy application (business owners) use the terminal to play a scenario.
* Blu Insights captures all the screen inputs and user actions.
* The outputs are:
  * a video of the whole scenario;
  * a JSON file containing all the information extracted from the screens and the time-stamped user actions;
  * a Selenium script that can be run on the modernized application.

The Selenium script can be adapted if needed using the Selenium IDE. To illustrate the service, we modernized CardDemo using AWS Blu Age (this is part of the accreditation program) and recorded a test scenario on the legacy application running on ENSONO. This video shows the entire process. For more information, read our documentation.

Thank you.

Read more

AWS Blu Insights – From the weather report to the Issues view

Dear Blu Insighters,

A few weeks ago, we introduced the weather report in the Transformation Center service. For each Run, a report is generated with a bunch of details to give you an overview of the efforts needed from all involved stakeholders (i.e. project and product teams) to modernize the related codebase. Most of the data in that report is now available in a new view (in Velocity > Issues). This view lists all the issues, organized in three levels: language, type, and summary. For more information, read the documentation.

Thanks

Read more

The Blue banner announces a better version

Dear Blu Insighters,

You have probably seen the new blue banner that appears at the top of Blu Insights to announce scheduled maintenance operations. We want to keep pushing new features and improvements at a high pace while raising the bar for security and operations. You may see it often, but no worries: we are not just patching; we are delivering your requests and much more. Please read the banner message carefully and check that the maintenance window does not conflict with a customer demo or meeting; if it does, just let us know and we will do our best to reschedule the operations. Although we keep those periods short and schedule them at the most convenient time for all users, AWS Blu Insights will be unavailable during maintenance.

Have a nice day!

Read more

My account gets disabled again and again

Dear Blu Insighters,

Security is a top priority in the design of every feature in AWS Blu Insights. Many of you have had your accounts disabled for inactivity (no login for over 30 days; AWS Blu Insights emails you 5 days before it actually disables the account). You won't have to wait anymore: you can now re-enable your account (ONLY if it was disabled for inactivity) and get back into Blu Insights within a few clicks. All you have to do is read the error/warning message and follow the steps.

No more excuses 😊

Have a nice day.

Read more

Download permissions on AWS Blu Insights

Hi Blu Insighters,

As you know, security is a key concern for our customers, and AWS Blu Insights is built accordingly. In this short post, I explain how users have full control over download permissions in Secured Spaces, Codebase, To-Dos, Versions Manager and Transformation Center.

Secured Spaces
The owner of the Secured Space determines whether users can download its content through the “Download” authorization.

Codebase
The owner of the project can manage, through user profiles on the “People” page, the “Download source code”, “Export reports” and "Download attachments" authorizations. For Codebase projects created from Secured Spaces, if the “Download” permission is disabled on the Secured Space, it overrides the profiles’ permissions. If the AI Booster is enabled and your authorization allows it, a fourth permission, "Can Use AI", controls who gets access to GenAI features in your project. Downloads are also logged in the project activities and notification emails.

To-Dos
The owner of the project can manage, through user profiles on the “People” page, the “Download attachments” authorization. As in Codebase, if the AI Booster is enabled on the project, you can toggle the "Can Use AI" permission to control who can access GenAI features on To-Dos.

Versions Manager
Invited users keep the “Download source code” and “Metadata” permissions they have on the reference Codebase project.

Transformation Center
Invited users keep the “Download source code”, “Metadata” and "Download attachments" permissions they have on the reference Codebase project. A new “Download outputs” permission can be given to invited users. As in Codebase and To-Dos, if the AI Booster is enabled on the project, you can toggle the "Can Use AI" permission to control who can access GenAI features.

To sum up, users of AWS Blu Insights have full control over download permissions. For more information, read our documentation. If you have questions, feel free to reach out.

Thank you.
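As a tiny illustration of the override rule above (a sketch, not the actual implementation): a source download is allowed only when both the Secured Space and the user's project profile allow it.

```python
def can_download_source(space_download_enabled: bool, profile_download_enabled: bool) -> bool:
    """A Secured Space with "Download" disabled overrides per-user profile permissions."""
    return space_download_enabled and profile_download_enabled

assert can_download_source(True, True) is True
assert can_download_source(False, True) is False  # the Secured Space setting wins
```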

Read more

Say hello to workpackages in Transformation Center

Dear Blu Insighters,

Almost all AWS Blu Insights pages let you see the data in different views. For example, in Codebase > Assets > Files, you can click on the 3 icons on the right of the filter bar to see the same data as a list of files (Files view), as folders (Folders view) and as paths (Paths view). All those views answer different, complementary needs for assessment, management, exploration, etc. We keep adding more of those views to give you all the data representations you need.

One of the latest views we built is the Workpackages view (in TC > Velocity > Runs). This view is special in that it combines data from different sources, i.e. Codebase (Workpackages) and Transformation Center (Inputs and Runs). Starting today, you can leverage this view to answer many questions, such as:
* Which Velocity versions were used to modernize a workpackage?
* Which Velocity runs were used to modernize my project?
* How many lines of code have already been transformed?
* …

For more details, check Browse Runs in the Transformation Center documentation.

Read more

Single Sign-on

We are excited to announce that AWS Blu Insights has added Single Sign-On (SSO) capabilities to improve security and simplify navigation from the parent service (AWS Mainframe Modernization). AWS Blu Insights is now accessible from the AWS Mainframe Modernization Console. Registration on bluinsights.aws is no longer required (and will later be deprecated). We often heard customers ask to use their AWS accounts when working on AWS Blu Age refactoring projects. With this new feature, customers can safely manage their accounts, leveraging all the AWS authentication mechanisms. The migration of the hundreds of legacy AWS Blu Insights accounts is in progress.

Want to know more? Start by checking the documentation and the FAQ.

Read more

Big graphs just got bigger

Dear Blu Insighters,

We’re thrilled to announce an update that allows you to create and manipulate dependencies graphs up to 3 times larger than before (up to 2 million vertices and 8 million edges). 🎉 Thanks to this update, you will no longer need to split many large projects into smaller ones. This is the first of many changes that will drastically improve your experience with AWS Blu Insights and allow you to focus on providing value to the customer without worrying about hitting limitations. We’re proud to continue pushing the boundaries of what’s possible with our product, and we’re excited to see what you will achieve with this new update.

Read more