Added gitea action pipeline

parent 560200bb3c
commit dd088b4d17
.gitea/workflows/sonarqube.yaml — 23 lines (new file)

@@ -0,0 +1,23 @@

```yaml
on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, synchronize, reopened]

name: SonarQube Scan

jobs:
  sonarqube:
    name: SonarQube Trigger
    runs-on: ubuntu-latest
    steps:
      - name: Checking out
        uses: actions/checkout@v4
        with:
          # Disabling shallow clone is recommended for improving relevancy of reporting
          fetch-depth: 0
      - name: SonarQube Scan
        uses: kitabisa/sonarqube-action@v1.2.0
        with:
          host: ${{ secrets.SONARQUBE_HOST }}
          login: ${{ secrets.SONARQUBE_TOKEN }}
```
CLAUDE.md — 72 lines (new file)

@@ -0,0 +1,72 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Overview

This is a Terraform module for managing vSphere resource pools (resource groups). It creates organized resource pools with CPU and memory resource allocation controls, along with proper tagging for management and organization. The module integrates with vSphere for virtualization and Vault for secrets management.

## Commands

### Terraform Operations

- `terraform init` - Initialize the Terraform working directory
- `terraform plan` - Create an execution plan showing changes
- `terraform apply` - Apply the planned changes
- `terraform destroy` - Destroy the managed infrastructure
- `terraform validate` - Validate configuration syntax
- `terraform fmt` - Format configuration files

### Development Workflow

- Always run `terraform validate` and `terraform plan` before applying changes
- Use a `terraform.tfvars` file for environment-specific variable values
- Secrets are managed through Vault - never hardcode sensitive values

## Architecture

### Core Components

**Resource Pool Management:**
- Creates resource pools (`vsphere_resource_pool`) under the compute cluster's default resource pool
- Configurable CPU and memory reservations, limits, expandability, and shares
- Default resource groups: Kubernetes, Docker, and Infra

**Tagging System:**
- Creates tag categories for Environment and ResourceGroupType
- Applies environment and resource group type tags to each resource pool
- Enables proper organization and filtering in vSphere

**Data Sources:**
- Retrieves vSphere credentials from Vault
- Looks up vSphere datacenter information
- References compute cluster "Home" for resource pool parent

### Variable Structure

**Key Variables:**
- `datacenter`: vSphere datacenter name
- `environment`: Environment name (dev, tst, acc, uat, prod, shared, tools)
- `resource_groups`: Map of resource groups with CPU/memory configuration
- `role_id`/`secret_id`: Vault AppRole authentication (sensitive)

**Resource Group Configuration:**
Each resource group supports:
- `name`: Display name for the resource pool
- `cpu_reservation`: Guaranteed CPU in MHz (default: 0)
- `cpu_expandable`: Allow CPU expansion beyond reservation (default: true)
- `cpu_limit`: Maximum CPU in MHz (default: -1, unlimited)
- `cpu_shares`: CPU priority (normal, low, high) (default: normal)
- `memory_reservation`: Guaranteed memory in MB (default: 0)
- `memory_expandable`: Allow memory expansion beyond reservation (default: true)
- `memory_limit`: Maximum memory in MB (default: -1, unlimited)
- `memory_shares`: Memory priority (normal, low, high) (default: normal)
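The settings above combine into a `resource_groups` value; a minimal sketch (all numbers are illustrative, the authoritative defaults live in the module's `variables.tf`):

```terraform
resource_groups = {
  kubernetes = {
    name               = "Kubernetes"
    cpu_reservation    = 4000   # MHz guaranteed for this pool
    cpu_expandable     = true
    cpu_limit          = -1     # unlimited
    cpu_shares         = "high"
    memory_reservation = 8192   # MB guaranteed for this pool
    memory_expandable  = true
    memory_limit       = -1     # unlimited
    memory_shares      = "normal"
  }
}
```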
### Resource Dependencies

Resources are created in the following order:
1. Tag categories for Environment and ResourceGroupType
2. Environment and resource group type tags
3. Resource pools with proper tagging

### Backend Configuration

Uses an S3-compatible backend (MinIO) for state storage with custom endpoint configuration. State file: `home/vsphere/network/vsphere-resourcegroup-config.tfstate`
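A backend block matching that description might look like the following sketch. The bucket name and endpoint are placeholders, and the attribute names use the pre-Terraform-1.6 form (1.6+ renames `endpoint` to an `endpoints { s3 = … }` block and `force_path_style` to `use_path_style`):

```terraform
terraform {
  backend "s3" {
    bucket                      = "terraform-state"   # placeholder bucket name
    key                         = "home/vsphere/network/vsphere-resourcegroup-config.tfstate"
    region                      = "main"              # MinIO ignores this, but the backend requires a value
    endpoint                    = "https://minio.example.com"  # placeholder MinIO endpoint
    force_path_style            = true                # MinIO uses path-style URLs
    skip_credentials_validation = true                # no AWS STS behind a custom endpoint
    skip_metadata_api_check     = true
    skip_region_validation      = true
  }
}
```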
SERVER_ASSIGNMENT.md — 403 lines (new file)

@@ -0,0 +1,403 @@
# Adding Servers to vSphere Resource Groups

This document provides comprehensive instructions for adding servers (VMs) to vSphere resource groups created by the terraform-vsphere-resourcegroups module.

## Overview

The terraform-vsphere-resourcegroups module creates resource pools (Kubernetes, Docker, Infra) with proper resource allocation and tagging. To assign VMs to these resource groups, you need to reference the resource pool IDs when creating virtual machines.

## Prerequisites

1. **DRS Enabled**: Ensure DRS (Distributed Resource Scheduler) is enabled on your vSphere cluster
2. **Resource Groups Created**: The resource groups module must be deployed first
3. **Required Data Sources**: Access to vSphere datacenter, cluster, datastore, network, and template data
4. **Vault Access**: Credentials for vSphere authentication via Vault
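If the cluster itself is also managed with Terraform (this repo only reads it through a data source), DRS can be switched on there; a sketch in which the cluster name and the `hosts` lookup are hypothetical:

```terraform
resource "vsphere_compute_cluster" "cluster" {
  name          = "Home"                          # assumed cluster name
  datacenter_id = data.vsphere_datacenter.dc.id

  # hypothetical list of host IDs gathered elsewhere in the configuration
  host_system_ids = var.host_system_ids

  drs_enabled          = true
  drs_automation_level = "fullyAutomated"
}
```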
## Step-by-Step Process

### Step 1: Deploy Resource Groups Module

First, ensure the resource groups are created:

```bash
# Navigate to the resource groups module
cd /path/to/terraform-vsphere-resourcegroups

# Initialize and apply the resource groups
terraform init
terraform plan
terraform apply
```

### Step 2: Add Outputs to Resource Groups Module

Add outputs to the resource groups module to expose resource pool IDs:

```terraform
# outputs.tf
output "resource_pool_ids" {
  description = "Map of resource group names to their resource pool IDs"
  value = {
    for k, v in vsphere_resource_pool.resource_groups : k => v.id
  }
}

output "resource_pool_names" {
  description = "Map of resource group keys to their display names"
  value = {
    for k, v in var.resource_groups : k => v.name
  }
}
```

### Step 3: Create VM Configuration

Create a new Terraform configuration for your VMs that references the resource groups:

```terraform
# main.tf - VM deployment
terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "~> 2.4"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "~> 3.0"
    }
  }
}

# Dynamic resource pool lookup - automatically discovers resource groups
# based on the server configurations
data "vsphere_resource_pool" "resource_groups" {
  for_each = toset([for server in var.servers : server.resource_group])

  name          = format("%s/Resources/%s", data.vsphere_compute_cluster.cluster.name, title(each.value))
  datacenter_id = data.vsphere_datacenter.dc.id
}

# Data sources for VM creation
data "vault_generic_secret" "vmware" {
  path = "secret/vmware"
}

data "vsphere_datacenter" "dc" {
  name = var.datacenter
}

data "vsphere_compute_cluster" "cluster" {
  name          = var.cluster_name
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_datastore" "datastore" {
  name          = var.datastore
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "network" {
  name          = var.network_name
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "template" {
  name          = "/${var.datacenter}/vm/Templates/${var.template_name}"
  datacenter_id = data.vsphere_datacenter.dc.id
}

# VM creation with resource group assignment
resource "vsphere_virtual_machine" "servers" {
  for_each = var.servers

  name             = each.value.name
  resource_pool_id = data.vsphere_resource_pool.resource_groups[each.value.resource_group].id
  datastore_id     = data.vsphere_datastore.datastore.id

  num_cpus = each.value.cpus
  memory   = each.value.memory

  network_interface {
    network_id = data.vsphere_network.network.id
  }

  disk {
    label            = "disk0"
    thin_provisioned = true
    size             = each.value.disk_size
  }

  guest_id = data.vsphere_virtual_machine.template.guest_id

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    customize {
      linux_options {
        host_name = each.value.name
        domain    = var.domain
      }

      network_interface {
        ipv4_address = each.value.ip_address
        ipv4_netmask = each.value.netmask
      }

      ipv4_gateway    = each.value.gateway
      dns_server_list = var.dns_servers
    }
  }

  # Optional: Apply tags if using the resource groups module for tag management
  # tags = [
  #   data.terraform_remote_state.resource_groups.outputs.environment_tag_id,
  #   data.terraform_remote_state.resource_groups.outputs.resource_group_tag_ids[each.value.resource_group]
  # ]
}
```
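The provider blocks themselves are elided above. A sketch of wiring the Vault secret into the vSphere provider — the Vault address and the key names inside `secret/vmware` (`username`, `password`, `server`) are assumptions about this environment:

```terraform
provider "vault" {
  address = "https://vault.example.com:8200"  # placeholder address

  auth_login {
    path = "auth/approle/login"
    parameters = {
      role_id   = var.role_id
      secret_id = var.secret_id
    }
  }
}

provider "vsphere" {
  # Assumed key names inside the secret/vmware Vault secret
  user                 = data.vault_generic_secret.vmware.data["username"]
  password             = data.vault_generic_secret.vmware.data["password"]
  vsphere_server       = data.vault_generic_secret.vmware.data["server"]
  allow_unverified_ssl = true  # common in labs; drop if vCenter has valid certs
}
```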
### Step 4: Define Variables

Create variables for your VM configuration:

```terraform
# variables.tf
variable "datacenter" {
  description = "vSphere datacenter name"
  type        = string
}

variable "cluster_name" {
  description = "vSphere cluster name"
  type        = string
}

variable "environment" {
  description = "Environment name (dev, tst, acc, uat, prod, shared, tools)"
  type        = string
}

variable "datastore" {
  description = "vSphere datastore name"
  type        = string
}

variable "network_name" {
  description = "vSphere network name"
  type        = string
}

variable "template_name" {
  description = "VM template name"
  type        = string
}

variable "domain" {
  description = "Domain name"
  type        = string
}

variable "dns_servers" {
  description = "List of DNS servers"
  type        = list(string)
}

variable "servers" {
  description = "Map of servers to create"
  type = map(object({
    name           = string
    resource_group = string # Must match a resource group key (kubernetes, docker, infra)
    cpus           = number
    memory         = number
    disk_size      = number
    ip_address     = string
    netmask        = number
    gateway        = string
  }))
}

# Vault variables
variable "role_id" {
  description = "Role ID for Vault AppRole authentication"
  type        = string
  sensitive   = true
}

variable "secret_id" {
  description = "Secret ID for Vault AppRole authentication"
  type        = string
  sensitive   = true
}
```

### Step 5: Configure Server Variables

Create a `terraform.tfvars` file with your server configurations (note that map keys containing hyphens must be quoted in HCL):

```terraform
# terraform.tfvars
datacenter    = "YourDatacenter"
cluster_name  = "Home"
environment   = "prod"
datastore     = "YourDatastore"
network_name  = "VM Network"
template_name = "centos8-template"
domain        = "example.com"
dns_servers   = ["8.8.8.8", "8.8.4.4"]

servers = {
  web01 = {
    name           = "web01"
    resource_group = "infra" # Assigns to the Infra resource pool
    cpus           = 2
    memory         = 4096
    disk_size      = 50
    ip_address     = "192.168.1.10"
    netmask        = 24
    gateway        = "192.168.1.1"
  }
  "k8s-master01" = {
    name           = "k8s-master01"
    resource_group = "kubernetes" # Assigns to the Kubernetes resource pool
    cpus           = 4
    memory         = 8192
    disk_size      = 100
    ip_address     = "192.168.1.20"
    netmask        = 24
    gateway        = "192.168.1.1"
  }
  "docker-host01" = {
    name           = "docker-host01"
    resource_group = "docker" # Assigns to the Docker resource pool
    cpus           = 4
    memory         = 8192
    disk_size      = 80
    ip_address     = "192.168.1.30"
    netmask        = 24
    gateway        = "192.168.1.1"
  }
}
```

### Step 6: Deploy the VMs

Deploy your VMs to the resource groups:

```bash
# Initialize Terraform
terraform init

# Plan the deployment
terraform plan

# Apply the configuration
terraform apply
```
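After the apply it can help to expose where each VM landed. A hedged sketch of outputs (the output names are not part of the original configuration; `default_ip_address` requires VMware Tools in the guest):

```terraform
# outputs.tf - illustrative additions
output "server_resource_pools" {
  description = "Resource pool ID each VM was placed in"
  value       = { for k, v in vsphere_virtual_machine.servers : k => v.resource_pool_id }
}

output "server_ips" {
  description = "Guest IP address reported by VMware Tools"
  value       = { for k, v in vsphere_virtual_machine.servers : k => v.default_ip_address }
}
```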
## Alternative Methods

### Method 1: Static Data Sources (Less Preferred)

You can define separate data sources for each resource group, but this requires pre-defining every resource group:

```terraform
data "vsphere_resource_pool" "kubernetes" {
  name          = format("%s/Resources/%s", data.vsphere_compute_cluster.cluster.name, "Kubernetes")
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "docker" {
  name          = format("%s/Resources/%s", data.vsphere_compute_cluster.cluster.name, "Docker")
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "infra" {
  name          = format("%s/Resources/%s", data.vsphere_compute_cluster.cluster.name, "Infra")
  datacenter_id = data.vsphere_datacenter.dc.id
}

# Use in the VM resource
resource "vsphere_virtual_machine" "server" {
  resource_pool_id = data.vsphere_resource_pool.kubernetes.id
  # ... rest of configuration
}
```

### Method 2: Using Remote State

If your resource groups are in a separate Terraform state:

```terraform
data "terraform_remote_state" "resource_groups" {
  backend = "s3"

  config = {
    bucket   = "your-terraform-state-bucket"
    key      = "home/vsphere/network/vsphere-resourcegroup-config.tfstate"
    endpoint = "your-minio-endpoint"
    # ... other S3 config
  }
}

resource "vsphere_virtual_machine" "server" {
  resource_pool_id = data.terraform_remote_state.resource_groups.outputs.resource_pool_ids["kubernetes"]
  # ... rest of configuration
}
```

## Resource Group Options

The module creates three default resource groups:

- **kubernetes**: For Kubernetes cluster nodes
- **docker**: For Docker hosts and container workloads
- **infra**: For infrastructure services (databases, monitoring, etc.)

You can customize these or add additional resource groups by modifying the `resource_groups` variable in the module.
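For example, a hypothetical `database` group could be added alongside the defaults; the attribute names follow the module's variable structure and the numbers are illustrative:

```terraform
resource_groups = {
  # ... existing kubernetes, docker, and infra entries ...
  database = {
    name               = "Database"
    cpu_reservation    = 2000     # MHz guaranteed
    cpu_expandable     = true
    cpu_limit          = -1       # unlimited
    cpu_shares         = "high"
    memory_reservation = 16384    # MB guaranteed
    memory_expandable  = true
    memory_limit       = -1       # unlimited
    memory_shares      = "high"
  }
}
```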
## Verification

After deployment, verify your VMs are in the correct resource pools:

1. **vSphere Client**: Check the resource pool assignments in the vSphere web client
2. **Terraform State**: Use `terraform show` to verify resource pool IDs
3. **Tags**: Verify that VMs have the correct environment and resource group tags

## Troubleshooting

### Common Issues

1. **DRS Not Enabled**: Ensure DRS is enabled on the cluster
2. **Resource Pool Not Found**: Verify the resource groups module has been applied
3. **Permission Issues**: Check vSphere permissions for resource pool operations
4. **Network Configuration**: Verify network settings and IP assignments

### Debug Commands

```bash
# Check resource pool IDs (run in the resource groups module directory)
terraform output resource_pool_ids

# Verify vSphere connectivity
terraform plan -target=data.vsphere_datacenter.dc

# Check VM status
terraform show
```

## Best Practices

1. **Resource Allocation**: Set appropriate CPU and memory limits per resource group
2. **Tagging**: Use consistent tagging for organization and automation
3. **Naming**: Use descriptive names for VMs and follow naming conventions
4. **Documentation**: Document resource group assignments and purposes
5. **Monitoring**: Monitor resource utilization across resource groups
6. **Backup**: Ensure Terraform state files are properly backed up

## Security Considerations

1. **Vault Secrets**: Never hardcode credentials; always use Vault
2. **State Security**: Secure Terraform state files (use encryption)
3. **Access Control**: Implement proper RBAC for resource pool management
4. **Network Security**: Configure appropriate network segmentation

This documentation provides all the necessary steps and code examples to successfully assign servers to vSphere resource groups using Terraform.