A brief introduction to watsonx Code Assistant & Terraform: A New Era of Intelligent Infrastructure Automation
Introduction
Terraform stands out as a powerful tool for operations teams, enabling the automation of infrastructure provisioning in an industrialized and scalable manner. Its declarative approach allows users to define the desired state of their infrastructure, and Terraform handles the complexities of achieving that state. Unlike tools such as Pulumi, which require proficiency in programming languages like Go or Python, Terraform boosts productivity without demanding any programming language knowledge. Instead, it relies on a human-readable configuration syntax (HCL), empowering operations professionals to build robust infrastructures declaratively.
However, for individuals unfamiliar with Terraform’s specific file syntax, achieving rapid productivity can be challenging, as it involves a learning curve. This is precisely where a Code Assistant can become a remarkable productivity accelerator. watsonx Code Assistant, powered by the Granite Large Language Model (LLM) and trained on verified and secure data, offers an excellent solution to rapidly enhance productivity for both newcomers and experienced Terraform users. For those new to Terraform, it can demystify the syntax and suggest correct configurations, while seasoned professionals can leverage it to accelerate their development workflows and move forward more swiftly.
Code Generation example and use case
Building on the idea of accelerating productivity, consider a scenario where an operations engineer needs to provision an IBM Kubernetes Service (IKS) cluster on IBM Cloud. Without a code assistant, this would typically involve sifting through documentation, understanding the various Terraform resources for IBM Cloud, and meticulously crafting the HCL (HashiCorp Configuration Language) code, which can be time-consuming and prone to syntax errors. However, with a tool like watsonx Code Assistant integrated into their VS Code environment, the user could simply type a natural language prompt such as: “Generate a Terraform module which provisions an IKS instance on IBM Cloud.” The AI, leveraging its training on verified and secure data, would then generate the appropriate Terraform code for the IKS cluster, complete with the necessary resources, variables, and outputs, significantly reducing the manual effort and potential for mistakes. This immediate generation of functional code allows the engineer to bypass the initial learning curve of syntax and directly focus on customizing the cluster to their specific requirements, thereby accelerating the entire infrastructure provisioning process.
Generate a terraform module which provisions an IKS instance on IBM Cloud
The sample output:
Here's a basic example of a Terraform module that provisions an IBM Kubernetes Service (IKS) cluster. This example assumes you have the IBM Cloud CLI and Terraform installed, and you've logged in to your IBM Cloud account.
// Assisted by watsonx Code Assistant
// Code generated by WCA@IBM
provider "ibm" {
  version = "~> 2.0"
}

resource "ibm_container_service_cluster" "example" {
  name               = "example-cluster"
  location           = "dal10"
  kubernetes_version = "1.21.3"

  machine_pool {
    name          = "default"
    instance_type = "b4i.metal.extra-large"
    node_count    = 3
  }

  addons {
    name    = "ingress"
    version = "1.0"
  }
}

output "kubeconfig" {
  value = ibm_container_service_cluster.example.kubeconfig
}
This Terraform configuration does the following:
- Defines the IBM Cloud provider.
- Creates an IKS cluster named example-cluster in the dal10 location (Dallas region).
- Configures the cluster to use Kubernetes version 1.21.3.
- Sets up a machine pool with 3 nodes of type b4i.metal.extra-large.
- Enables the Ingress addon for the cluster.
- Outputs the kubeconfig file for the cluster, which is necessary for interacting with the cluster.
To use this module, save it as main.tf in a new directory, then initialize Terraform and apply the configuration:
# Assisted by watsonx Code Assistant
# Code generated by WCA@IBM in this programming language is not approved for use in IBM product development.
terraform init
terraform apply
After running terraform apply, you'll see output detailing the resources being created. Once the process is complete, you can use the kubeconfig output to interact with your new IKS cluster.
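For instance, one way to make that output usable by kubectl is to write it to disk. The following is a small sketch using the hashicorp/local provider, assuming the ibm_container_service_cluster.example resource from the generated module above; the resource and file names are illustrative:

# Sketch: persist the generated kubeconfig to a local file for kubectl to consume
resource "local_file" "kubeconfig" {
  content  = ibm_container_service_cluster.example.kubeconfig
  filename = "${path.module}/kubeconfig.yaml"
}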
Remember to replace the values for name, location, kubernetes_version, machine_pool.instance_type, and machine_pool.node_count with your desired settings.
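If you would rather parameterize those settings than hard-code them, a minimal variables.tf sketch might look like this (the variable names and defaults below are illustrative, not part of the generated output):

# Hypothetical variables.tf: illustrative names and defaults, adjust to your environment
variable "cluster_name" {
  type    = string
  default = "example-cluster"
}

variable "location" {
  type    = string
  default = "dal10"
}

variable "kubernetes_version" {
  type    = string
  default = "1.21.3"
}

variable "worker_node_count" {
  type    = number
  default = 3
}

The hard-coded values in the generated resource block can then be swapped for references such as var.cluster_name.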
For more information on IBM Cloud provider and IKS resources, refer to the official Terraform IBM Cloud provider documentation: https://registry.terraform.io/providers/IBM-Cloud/ibm/latest/docs/resources/container_service_cluster
Now let’s test a more complex prompt.
Generate a terraform module that provisions a VPC on IBM Cloud and then provisions an OpenShift cluster with 3 worker nodes
// Assisted by watsonx Code Assistant
// Code generated by WCA@IBM
provider "ibm" {
  version = "~> 2.0"
}

resource "ibm_vpc_subnet" "example" {
  name                = "example-subnet"
  resource_group_name = ibm_resource_group.example.name
  zone                = "dal05"
  ipv4_cidr_block     = "10.0.0.0/24"
}

resource "ibm_resource_group" "example" {
  name = "example-resource-group"
}

resource "ibm_vpc_instance_network_interface" "example" {
  vpc_instance {
    id = ibm_vpc_instance.example.id
  }
  subnet {
    id = ibm_vpc_subnet.example.id
  }
}

resource "ibm_vpc_instance" "example" {
  name                = "example-instance"
  resource_group_name = ibm_resource_group.example.name
  profile {
    name = "bx2-2x8"
  }
  network_interfaces = [ibm_vpc_instance_network_interface.example.id]
}

resource "ibm_container_service_cluster" "example" {
  name               = "example-cluster"
  location           = "dal10"
  kubernetes_version = "4.8.14"

  machine_pool {
    name          = "worker"
    instance_type = "bx2-2x8"
    node_count    = 3
  }

  vpc_config {
    subnet_ids = [ibm_vpc_subnet.example.id]
  }
}

output "kubeconfig" {
  value = ibm_container_service_cluster.example.kubeconfig
}
Disclaimer: All code generated by generative AI tools must be tested before being put into production. 🚩
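One example of the kind of adjustment that testing often surfaces: current Terraform releases expect provider version constraints in a required_providers block rather than a version argument inside the provider block, so the generated snippets above would typically be adapted along these lines (a sketch, not part of the generated output; the version constraint shown is illustrative):

# Sketch: declaring the IBM provider requirement the way current Terraform versions expect
terraform {
  required_providers {
    ibm = {
      source  = "IBM-Cloud/ibm"
      version = "~> 1.0" # illustrative constraint, pin to the release you have validated
    }
  }
}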
Understanding Existing Terraform Code
Beyond generating new infrastructure, a code assistant proves invaluable for deciphering pre-existing Terraform scripts. This is a common scenario in collaborative environments or when inheriting legacy infrastructure. Rather than meticulously tracing through lines of HCL to understand resource dependencies, variable inputs, and output values, a user can leverage the code assistant. For example, if presented with an unfamiliar main.tf file, a user could simply prompt: "Explain what this Terraform script does." The assistant would then analyze the code, providing a clear, concise summary of the resources being provisioned, their configurations, and any relationships between them. This capability significantly reduces the time and effort required for onboarding new team members, troubleshooting existing deployments, or simply gaining a quick understanding of complex infrastructure definitions without deep diving into every line of code. It effectively acts as a built-in documentation generator, accelerating comprehension and fostering greater operational efficiency.
Here is the prompt:
Explain what the following Terraform script does;
terraform {
  required_version = ">= 0.12"
}

provider "azurerm" {
  features {}
}

locals {
  common_app_settings = {
    "INSTRUMENTATIONKEY" = azurerm_application_insights.appinsight-app.instrumentation_key
  }
}

resource "azurerm_resource_group" "rg-app" {
  name     = "${var.resource_group_name}-${var.environment}"
  location = var.location

  tags = {
    ENV = var.environment
  }
}

resource "azurerm_app_service_plan" "plan-app" {
  name                = "${var.service_plan_name}-${var.environment}"
  location            = azurerm_resource_group.rg-app.location
  resource_group_name = azurerm_resource_group.rg-app.name

  sku {
    tier = "Standard"
    size = "S1"
  }

  tags = {
    ENV       = var.environment
    CreatedBy = var.createdby
  }
}

resource "azurerm_app_service" "app" {
  name                = "${var.app_name}-${var.environment}"
  location            = azurerm_resource_group.rg-app.location
  resource_group_name = azurerm_resource_group.rg-app.name
  app_service_plan_id = azurerm_app_service_plan.plan-app.id
}

resource "azurerm_application_insights" "appinsight-app" {
  name                = "${var.app_name}-${var.environment}"
  location            = azurerm_resource_group.rg-app.location
  resource_group_name = azurerm_resource_group.rg-app.name
  application_type    = "web"

  tags = {
    ENV       = var.environment
    CreatedBy = var.createdby
  }
}
And the output ⬇️
This Terraform script provisions an Azure Resource Manager (ARM) template to create an Azure App Service and an Application Insights instance. Here's a breakdown of what each resource does:
- terraform { required_version = ">= 0.12" }: This line specifies the minimum required Terraform version for this configuration.
- provider "azurerm": This block defines the Azure provider, which is used to interact with Azure resources.
- locals { ... }: This block defines local values, which are variables that are only available within this Terraform configuration. In this case, it creates a map called common_app_settings that contains an INSTRUMENTATIONKEY value from the azurerm_application_insights resource (see the usage sketch after this list).
- resource "azurerm_resource_group" "rg-app": This resource creates an Azure Resource Group named using the var.resource_group_name and var.environment variables. The location is set using the var.location variable, and it has a tag for the environment.
- resource "azurerm_app_service_plan" "plan-app": This resource creates an Azure App Service Plan named using the var.service_plan_name and var.environment variables. The location is set to the resource group's location, and it belongs to the resource group created earlier. The SKU is set to "Standard" with a size of "S1". It also has tags for the environment and the creator.
- resource "azurerm_app_service" "app": This resource creates an Azure App Service named using the var.app_name and var.environment variables. The location is set to the resource group's location, and it belongs to the resource group created earlier. The App Service is associated with the App Service Plan created earlier.
- resource "azurerm_application_insights" "appinsight-app": This resource creates an Azure Application Insights instance named using the var.app_name and var.environment variables. The location is set to the resource group's location, and it belongs to the resource group created earlier. The application type is set to "web", and it has tags for the environment and the creator.
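Worth noting: the script as shown defines common_app_settings but never references it. If the intent is to feed the Application Insights key into the App Service, one plausible way to wire it up would be along these lines (a sketch, not part of the original script):

# Sketch: referencing the local map from the App Service defined above
resource "azurerm_app_service" "app" {
  name                = "${var.app_name}-${var.environment}"
  location            = azurerm_resource_group.rg-app.location
  resource_group_name = azurerm_resource_group.rg-app.name
  app_service_plan_id = azurerm_app_service_plan.plan-app.id

  # Pass the shared settings defined in locals into the app's configuration
  app_settings = local.common_app_settings
}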
The var. prefix in front of the variable names indicates that these are input variables that need to be provided when running the Terraform configuration.
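A minimal variables.tf to accompany the script might therefore look like this (the names match the var. references above; the types are assumptions, and you may want to add defaults or descriptions):

# Sketch of the input variables the script expects; adjust types and defaults as needed
variable "resource_group_name" { type = string }
variable "service_plan_name"   { type = string }
variable "app_name"            { type = string }
variable "environment"         { type = string }
variable "location"            { type = string }
variable "createdby"           { type = string }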
To use this module, save it as main.tf in a new directory, then initialize Terraform and apply the configuration:
# Assisted by watsonx Code Assistant
# Code generated by WCA@IBM
terraform init
terraform apply
After running terraform apply, you'll see output detailing the resources being created. Once the process is complete, you'll have an Azure App Service and an Application Insights instance set up in the specified resource group and location.
Conclusion
These two simple use cases, generating Terraform scripts and explaining existing code, demonstrate how an intelligent code assistant can significantly accelerate the productivity of those responsible for infrastructure automation at scale. In this example, we have seen how IBM watsonx Code Assistant, leveraging the open Granite model trained on verified and secure code, can be combined with the declarative power of Terraform from IBM and HashiCorp. This combination brings increased agility and fosters innovation in hybrid cloud deployments, enabling teams to automate more effectively and efficiently.