v0.9.4
Rollout
- FFM: February 7, 2024
- MDB: February 22, 2024
PaaS Release v0.9.4
This release of the OSC PaaS layer brings significant updates across several components. Key highlights include TLS fixes, new audit capabilities, and numerous bug fixes, with a notable focus on enhancing security and stability across all modules.
Key Features and Improvements
- Multi-AZ PV Support: Enhanced support for Multi-AZ Persistent Volumes, providing better resilience and efficiency in storage management.
- Gardener Updates: Rebase to Gardener v1.76.4, adding support for Kubernetes 1.27, along with updates to various Gardener components to ensure compatibility with the latest versions.
- Enterprise-Level Security Features: Introduction of T-SEC Enterprise Certificates and other security measures across multiple modules.
- Bug Fixes and Stability Improvements: Various bug fixes have been implemented, particularly around Cilium versioning, dashboard compatibility, and cloud-profile configurations.
- Private Deployments: Supports deployment of all Gardener-Stack cluster endpoints (e.g. Garden Cluster, Seed-Cluster, Shoot-Cluster) on private network ranges.
- Cilium Native Feature: Enhanced overlay networking and Direct Server Return (DSR) load balancing for Cilium, offering optimized traffic management, improved scalability, and superior network performance without the need for tunneling protocols.
T-SEC Enterprise Certificates Feature
We have recently expanded our service offerings by introducing the capability to enable T-SEC Enterprise Certificates for deployed clusters. This feature complements our existing self-signed certificate implementation, traditionally used for internal component communications. The primary benefit of this enhancement is that it allows customers to opt for automatically deployed trusted certificates across all user-facing services, such as the Gardener dashboard, SSO, Dex, and MinIO. This improvement not only strengthens security but also enhances the trustworthiness of interactions with these services.
Multi-AZ Persistent Volume Feature
We are excited to announce the release of the Multi-AZ PV Support feature, a development that has been eagerly awaited. This feature allows for the mounting of volumes across availability zone boundaries, a significant step forward in enhancing data resilience and application flexibility. We have focused on ensuring this integration is seamless and that performance remains reliable, even when volumes extend over multiple availability zones. This advancement is particularly beneficial in environments that require distributed architectures and high availability.
For Shoot clusters that are fully managed by customers, this feature will be effective when customers create new volumes. This approach provides flexibility, allowing customers to migrate their workloads according to their own timelines and strategies. This ensures a smooth transition and minimal disruption to existing operations.
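As an illustration, cross-zone volume scheduling is typically driven by a topology-aware StorageClass used when creating new volumes. The following is a minimal sketch only; the StorageClass name and provisioner are placeholders, not values shipped with this release:

```yaml
# Hypothetical example: the name and provisioner below are placeholders,
# not part of this release's defaults.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: multi-az-example
provisioner: example.csi.driver   # placeholder CSI driver
# Delay volume binding until a pod is scheduled, so the volume's zone
# placement can follow the workload across availability zones.
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

New PersistentVolumeClaims referencing such a class pick up the cross-zone behavior; existing volumes are unaffected until recreated, matching the migrate-at-your-own-pace approach described above.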
Gardener Kubernetes Audit Log Feature
The Gardener Kubernetes Audit Log extension facilitates the transmission of Kubernetes API-Server audit events to the standard output (stdout). These events are subsequently collected by the Gardener logging stack and can be accessed through the Plutono UI. The Gardener logging stack also manages the separation of audit logs. Currently, we exclusively support stdout log output, although the core audit extension has the capability to handle additional methods, which we categorize as 'experimental' at the moment.
To enable the Gardener Audit Log extension, the following steps are required:
1. Audit Policy Configuration
Begin by creating a ConfigMap containing the audit policy that will be referenced by the Kubernetes API server. In the example below, we assume that the namespace is named garden-dev:
apiVersion: v1
kind: ConfigMap
metadata:
  name: auditpolicy
  namespace: garden-dev
data:
  policy: |-
    apiVersion: audit.k8s.io/v1
    kind: Policy
    omitStages:
    - "RequestReceived"
    rules:
    # Log events for members of the 'oidc:offline_access' group
    - level: Metadata
      userGroups:
      - "oidc:offline_access"
    # Log cluster-admin
    - level: Metadata
      users:
      - "system:cluster-admin"
    # Exclude logging for specific user groups
    - level: None
      userGroups:
      - "system:nodes"
      - "system:serviceaccounts:*"
    # Exclude specific non-resource URLs
    - level: None
      nonResourceURLs:
      - "/api*" # Wildcard matching.
      - "/version"
      - "/healthz"
      - "/readyz"
    # Exclude specific resource groups
    - level: None
      resources:
      - group: "coordination.k8s.io"
      - group: ""
        resources: ["events"]
2. Enable the Gardener Audit Extension
In your shoot configuration, enable the extension and set the auditPolicy for the Kubernetes API server as shown in the following example:
...
extensions:
- type: audit
  providerConfig:
    apiVersion: audit.metal.extensions.gardener.cloud/v1alpha1
    kind: AuditConfig
    webhookMode: blocking
    backends:
      log:
        enabled: true
kubernetes:
  kubeAPIServer:
    auditConfig:
      auditPolicy:
        configMapRef:
          name: auditpolicy
...
Private Deployment Feature
The Private Gardener option supports the deployment of all Gardener stack cluster endpoints, including the Garden Management Cluster, Seed Cluster, and Shoot Cluster, within private network ranges. In this scenario, an additional requirement arises for customers: they can only access and manage their Shoot clusters through a dedicated VPN or WSA VRF peering. Despite these restrictions, customers still retain the ability to expose their workloads on Shoot clusters either through public LoadBalancers or by maintaining privacy and using internal private load balancers. This deployment also necessitates having separate and non-overlapping network ranges for Garden, Seed, Shoot clusters, and customer networks.
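For example, keeping a workload private on a Shoot cluster typically comes down to requesting an internal load balancer via a Service annotation. The sketch below assumes the OpenStack cloud provider's internal-load-balancer annotation; the annotation key is provider-specific and the workload names are hypothetical, so verify the exact key for your environment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal      # hypothetical workload name
  annotations:
    # Assumed, provider-specific annotation; shown for illustration only.
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
spec:
  type: LoadBalancer         # without the annotation, this would request a public LB
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
```

Omitting the annotation yields the public LoadBalancer path also mentioned above.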
Cilium Native Feature
The Cilium Native feature brings enhanced overlay networking and Direct Server Return (DSR) load balancing to Cilium, offering optimized traffic management, improved scalability, and superior network performance without the need for tunneling protocols.
Due to a bug in version v0.9.2, the Cilium native configuration failed, causing a fallback to default tunneling, which unnecessarily encapsulated the traffic. After upgrading to v0.9.4, customers will need to update the network configuration on all Shoots. To minimize the number of network configuration variations supported and to ensure optimal customer support, it is crucial that all customers update their clusters.
Current Mode of Operation
kind: Shoot
apiVersion: core.gardener.cloud/v1beta1
metadata:
  name: ...
  namespace: ...
spec:
  kubernetes:
    kubeProxy:
      mode: IPTables
      enabled: false
  networking:
    type: cilium
    providerConfig:
      apiVersion: cilium.networking.extensions.gardener.cloud/v1alpha1
      kind: NetworkConfig
      store: kubernetes
      bpfSocketLBHostnsOnly:
        enabled: true
    pods: ...
    nodes: ...
    services: ...
Future Mode of Operation
kind: Shoot
apiVersion: core.gardener.cloud/v1beta1
metadata:
  name: ...
  namespace: ...
spec:
  kubernetes:
    kubeProxy:
      mode: IPTables
      enabled: false
  networking:
    type: cilium
    providerConfig:
      apiVersion: cilium.networking.extensions.gardener.cloud/v1alpha1
      kind: NetworkConfig
      store: kubernetes
      overlay:
        enabled: true
      tunnel: disabled
      loadBalancingMode: dsr
      bpfSocketLBHostnsOnly:
        enabled: true
    pods: ...
    nodes: ...
    services: ...
In our latest update to the networking configuration, we've made several key enhancements to improve your network's operation and performance, while continuing to use the cilium networking type and the existing providerConfig structure. We've introduced and enabled a new overlay setting, allowing your network to utilize an overlay network for the efficient encapsulation of network traffic. This update removes the need for tunneling protocols, which were previously used for packet encapsulation in overlay networks, by explicitly disabling the tunnel setting.
We've also updated the loadBalancingMode to dsr (Direct Server Return), a pivotal enhancement that streamlines load balancing. This mode enables return traffic from backend servers to bypass the load balancer, significantly boosting your network's performance. The bpfSocketLBHostnsOnly setting remains enabled, ensuring that BPF-based socket load balancing within the host namespace continues to provide robust and efficient traffic management.
Following the upgrade to v0.9.4, the ip-masq-agent ConfigMap will become unnecessary, irrespective of whether the network configuration employs tunneling.
Gardener Updates
We are happy to announce that we have updated our Gardener stack to v1.76.4, adding support for Kubernetes 1.27. Moreover, with this release, the following additional Kubernetes patch releases are now supported: 1.24.17, 1.25.16, 1.26.11, and 1.27.8. With this release, the following Kubernetes versions are available for Gardener shoot clusters:
- v1.23 (deprecated)
- version: 1.23.16
- v1.24 (deprecated)
- version: 1.24.10
- version: 1.24.13
- version: 1.24.14
- version: 1.24.17
- v1.25
- version: 1.25.6
- version: 1.25.9
- version: 1.25.10
- version: 1.25.13
- version: 1.25.15
- version: 1.25.16
- v1.26
- version: 1.26.5
- version: 1.26.8
- version: 1.26.10
- version: 1.26.11
- v1.27
- version: 1.27.7
- version: 1.27.8
Note
Deprecated Kubernetes versions will be removed with a future OSC PaaS release. Therefore, it is recommended to upgrade shoot clusters as soon as possible.
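Upgrading a shoot is done by raising spec.kubernetes.version in the Shoot manifest to one of the supported versions listed above, for example:

```yaml
kind: Shoot
apiVersion: core.gardener.cloud/v1beta1
metadata:
  name: ...
  namespace: ...
spec:
  kubernetes:
    version: 1.27.8   # pick a supported version from the list above
```

Note that Kubernetes only supports skipping patch versions; minor versions must be upgraded one at a time (e.g. 1.25 to 1.26, then 1.26 to 1.27).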
Support for new VM flavors
Added the following new VM flavor, which can be used for the data plane nodes of shoots:
Name | CPU | RAM | EPC
---|---|---|---
vm-dedicated-sgx-32-32-32 | 32 | 32 | 32
Node Feature Discovery Add-on Feature
With this release, the upstream Node Feature Discovery (NFD) add-on is offered as an extension for shoot clusters. The extension is enabled by default, so the NFD add-on is automatically available in a shoot cluster. To disable it for a shoot, it must be explicitly disabled in the shoot's list of extensions as follows:
kind: Shoot
apiVersion: core.gardener.cloud/v1beta1
metadata:
  name: ...
  namespace: ...
spec:
  ...
  extensions:
  - type: osc-nfd-shoot-service
    disabled: true
  hibernation:
    ...
Breaking Changes
- Gardener Dashboard: The dashboard URL changed from the previous https://gardener.&lt;cluster&gt;.&lt;region&gt;.osc.live to https://gardener.apps.&lt;cluster&gt;.&lt;region&gt;.osc.live/
Deprecated Features
- Feature: Deprecation of the ip-masq-agent configuration for the Cilium CNI plugin (see above)
- Feature: Support for Kubernetes 1.23.x
  - DEPRECATED: 1.2.2024
  - WILL BE REMOVED: 1.4.2024
  - Mitigation: please migrate to a higher, supported version of Kubernetes.
- Feature: Support for Kubernetes 1.24.x
  - DEPRECATED: 1.2.2024
  - WILL BE REMOVED: 1.6.2024
  - Mitigation: please migrate to a higher, supported version of Kubernetes.